
Thursday, September 01, 2011

RAM disk is already in Linux and nobody told you (a.k.a Shared memory for ordinary folk)

The Linux 2.6 kernel these days comes with an in-memory file system. This is really shared memory across processes ("Everything old is new again!").

The beauty of the Linux implementation (like everything else) is that this shared memory looks like a regular file system - /dev/shm (but is really Tmpfs). So your application - yes, even a Java application - can use this shared memory across processes as if it were just writing to regular files. You can create directories, set up a memory quota for the file system, tail files, change into directories, and open-write-close files that other processes can then read. Convenient!

RAM disks are not a new concept. But a RAM disk driver that is built into the kernel adds a totally different level of credibility and trust to the concept.

All the contents of this directory are held in memory. Nothing gets written to the disk: no flushes, no IO wait times. Naturally, being in-memory, you will lose all the contents when the OS reboots. What possible use can this be to us, I hear you ask? Well, well... where do I begin:

  • Memory is cheap. 96GB of RAM is quite common these days on server-class machines
  • You can run a big RDBMS on this file system. Typically these databases are replicated/clustered over the network anyway
    • So you can run the entire DB in-memory and still have HA because the replica is running on another machine anyway (low "disk" latency + high TPS + HA)
    • Why write to a local disk, which can crash at any time?
    • Why spend so much on expensive SSDs?
    • 10GigE is already here
  • Run medium-sized JVMs and push all the heavy data to this shared-memory file system
    • You free up the heap to do simple in-JVM caching and reduce the pressure on GC by moving all the data to /dev/shm
    • If your JVM crashes, another JVM can be notified to pick up that data since it is just stored as a bunch of files and directories
  • People used to do IPC all the time using old-fashioned shared-memory constructs, but it fell out of favor because networks and network drivers became quite fast
    • Also moving away from IPC to TCP-over-localhost gave you freedom to spread your processes across machines (TCP-over-GigE)
    • Perhaps it is now worthwhile to re-examine that approach and shave precious milliseconds by not copying data back and forth between processes over sockets
To that end I wrote a simple Java class to measure the performance of writing to a file on /dev/shm compared to a regular disk-based file on /tmp. The results are interesting, and I'm hoping they will make you readers rethink your software systems ("Everything old is new ...").

The program is a simple Java file writer that forces a flush every few tens of bytes (which costs nothing when writing to /dev/shm). The destination file and its path can be specified as a command-line argument, so it's easy to compare performance. I ran these tests on a Cloudera-Ubuntu VMWare image running on my 64-bit Windows 7 laptop with 4GB RAM and a 2.3 GHz dual-core Intel i5. Unsurprisingly, the in-memory version is about 7x faster for a 112KB file. Also, the laptop was running on battery, which means the processor steps down its speed to save power.
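
Roughly, the writer looks like the sketch below (an illustrative reconstruction, not the exact class I ran; the class name, path, byte count and flush interval are placeholders):

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.OutputStream;

    // Illustrative sketch: write ~112KB to the given path, flushing every few tens
    // of bytes, so that a /dev/shm file and a /tmp file can be compared directly.
    public class ShmFileWriteTest {
        public static void main(String[] args) throws Exception {
            String path = args.length > 0 ? args[0] : "/dev/shm/write-test.dat";
            int totalBytes = 112 * 1024; // roughly the file size used above
            int flushEvery = 64;         // force a flush every few tens of bytes

            byte[] chunk = new byte[flushEvery];
            long start = System.nanoTime();
            try (OutputStream out = new BufferedOutputStream(new FileOutputStream(path))) {
                for (int written = 0; written < totalBytes; written += flushEvery) {
                    out.write(chunk);
                    out.flush(); // push the buffered bytes to the OS; trivial on Tmpfs
                }
            }
            long elapsedMicros = (System.nanoTime() - start) / 1000;
            System.out.println("Wrote " + totalBytes + " bytes to " + path
                    + " in " + elapsedMicros + " us");
        }
    }

Run it once with /dev/shm/write-test.dat and once with /tmp/write-test.dat as the argument and compare the printed timings.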


It's perplexing that people don't talk about this relatively new Linux feature more openly.

Other interesting commands you can run against this file system:
  • mkdir
  • cat
  • tail, grep
  • ipcs -a
  • df -k
Follow up articles:
Detailed log:


Until next time,
Ashwin.

Tuesday, September 13, 2011

Offloading data from the JVM heap (a little experiment)

Last time, I wrote about the possibility of using Linux shared memory to offload cacheable/reference data from the JVM. To that end I wrote a small Java program to see if it was practical. The results were better (some even stranger) than I had expected.

Here's what the test program does:

  • Create a bunch of java.nio.ByteBuffers that add up to 96MB of storage
  • Write ints starting from the first buffer, all the way to the last one - that's writing a total of 96MB of some contrived data
  • For each test, the buffer creation, writing and deletion are done 24 times (JIT warm-up)
  • For each such test iteration, measure the memory (roughly) used in the JVM heap, the time taken to create those buffers and the time taken to write 96MB of data
  • Obviously, there are things here that will sound fishy to you - like why use ByteBuffers instead of just writing to an OutputStream, or why write to the buffers in sequence. Well, my intention was just to get a ballpark figure for the performance and the viability of moving data off the JVM heap
About the test:
  • There are really 5 different ways to create the buffers. Then there are 2 variations of these tests in which the buffer (block) sizes vary, but the total bytes written are the same
  • The buffers (blocks) for each variation are created as:
    • Ordinary HeapByteBuffers inside the JVM heap itself - as a baseline for performance
    • DirectByteBuffers
    • A file created on Ext4 using RandomAccessFile, with parts of the file memory-mapped via its FileChannel. The file is opened in "rw" mode ("rwd" and "rws" are the other options)
    • The same as above, but the file resides in /dev/shm - the in-memory, shared-memory virtual file system (Tmpfs)
    • The buffers are created using Apache's Tomcat Native Library, which in turn uses the Apache Portable Runtime (APR). Its shared memory (Shm) feature was used to create the buffers. This is similar to DirectByteBuffers, but the buffers reside in a common area of OS memory, owned by no single process and shared between processes (similar to /dev/shm but without the file system wrapper overhead). A sketch of the plain-Java variants appears after this list
  • The machine used to test was my moderately powered Windows 7 home laptop with 8GB RAM and a 2.3GHz i5, running a Cloudera Ubuntu Linux image in VMWare Player. There were a few other processes running, but nothing that was using the CPU extensively. 500MB+ of memory was free and available
  • The VM had 1GB RAM and the JVM heap was 256MB
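
To make the variants concrete, here is a rough sketch of how the plain-Java buffers can be created and written (a simplified illustration, not the actual test code; the APR/Shm variant is left out because it needs the Tomcat Native bindings, and the class name, file paths and block size are placeholders):

    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Simplified sketch of the plain-Java buffer-creation variants used in the test.
    public class BufferVariantsSketch {
        static final int BLOCK_SIZE = 4 * 1024 * 1024; // 4MB blocks; 24 of them add up to 96MB

        public static void main(String[] args) throws Exception {
            // 1. Ordinary heap buffer - lives inside the JVM heap.
            ByteBuffer heap = ByteBuffer.allocate(BLOCK_SIZE);

            // 2. Direct buffer - native memory outside the heap, owned by this JVM.
            ByteBuffer direct = ByteBuffer.allocateDirect(BLOCK_SIZE);

            // 3. Memory-mapped file on a regular Ext4 partition.
            MappedByteBuffer onDisk = mapBlock("/tmp/mmap_test.dat", 0);

            // 4. Same thing, but the file lives on Tmpfs - i.e. in RAM.
            MappedByteBuffer onShm = mapBlock("/dev/shm/mmap_test.dat", 0);

            // Write ints until each block is full, as in the test.
            for (ByteBuffer buf : new ByteBuffer[] {heap, direct, onDisk, onShm}) {
                while (buf.remaining() >= 4) {
                    buf.putInt(42);
                }
            }
        }

        // Maps one BLOCK_SIZE-sized slice of the file in read-write mode ("rw";
        // "rwd" and "rws" are the synchronous alternatives).
        static MappedByteBuffer mapBlock(String path, int blockIndex) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
                 FileChannel channel = raf.getChannel()) {
                return channel.map(FileChannel.MapMode.READ_WRITE,
                        (long) blockIndex * BLOCK_SIZE, BLOCK_SIZE);
            }
        }
    }
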
Results:
  • The test program was run once for each configuration, but each test itself ran 24 times to allow the JIT to warm up and the file system caches to stay warm where needed
  • The test prints out the timings with headers which were then compiled into a single text file and then analyzed in RStudio

summary_mem_used_time_taken_millis

  block_size                      test_type perctile95_buffer_create_and_work_time_millis perctile95_mem_bytes
1       4096                         direct                                       1555.65              3047456
2       4096 file_/dev/shm/mmap_test.dat_rw                                        661.70              3047632
3       4096          file_mmap_test.dat_rw                                       2055.75              3047632
4       4096                           heap                                       1071.15            102334496
5    4194304                         direct                                        653.85                 3008
6    4194304 file_/dev/shm/mmap_test.dat_rw                                        561.40                 3184
7    4194304          file_mmap_test.dat_rw                                       3878.25                 3184
8    4194304                           heap                                       1064.80            100664960
9    4194304                            shm                                        678.40                 2496

Interpretation of the results:
  • The test where the block size was 4KB had quite a lot of memory overhead for the non-Java-heap ByteBuffers. Memory mapping was also slow for these small sizes, as the Javadoc for FileChannel.map() itself warns
  • The JVM heap test was slower than I had expected (for larger ByteBuffers). I was expecting that to be the fastest. Perhaps it was the small-memory (1GB) virtualized OS it was running in. For smaller block sizes approaching 1K or less, the JVM heap performance is unbeatable. But in these tests, the focus was on larger block sizes
  • The Apache shared memory test would not even start for the 4KB tests as it would complain about "not enough space"
  • Almost everything fared well in the larger 4MB test. The per-block overhead was lower for the off-heap tests, and the performance was nearly identical for /dev/shm, Apache Shm and DirectByteBuffer
Sources:
  • The sources for this test are available here.
  • To run all the tests except Apache Shm you only need to compile JavaBufferTest and run it with the correct parameters
  • To run all tests, you can use the sub-class AprBufferTest which can test Apache Shm and also the remaining tests. To compile this you'll need tomcat-coyote.jar from apache-tomcat-7.0+. To run this you'll need the Jar file and the Tomcat Native bindings - tcnative.dll or libtcnative for Linux
There are advantages to using ByteBuffer outside the Java heap:
  • The mapped file or the shared memory segments can outlive the JVM's life span. Another process can come in and attach to it and read the data
  • Reduces the GC pressure
  • Zero-copy transfers to another file, network connection or device are possible using FileChannel.transferTo() (see the sketch after this list)
  • Several projects and products have used this approach to host large volumes of data in the JVM
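
For example, a file on /dev/shm can be copied out through the kernel like this (a hedged sketch; the class and file names are made up):

    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;

    // Sketch: copy one file's contents to another channel without pulling the
    // bytes through the JVM heap. The target can also be a socket channel.
    public class ZeroCopySketch {
        public static void main(String[] args) throws Exception {
            try (FileChannel src = new RandomAccessFile("/dev/shm/mmap_test.dat", "r").getChannel();
                 FileChannel dest = new RandomAccessFile("/tmp/copy_of_test.dat", "rw").getChannel()) {
                long position = 0;
                long size = src.size();
                while (position < size) {
                    // transferTo() lets the kernel move the bytes directly.
                    position += src.transferTo(position, size - position, dest);
                }
            }
        }
    }
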
Disadvantages:
  • The data has to be stored, read and written using primitives to the ByteBuffer - putInt(), putFloat(), putChar() etc
  • Java objects cannot be read/written as in a simple Java program. Everything has to be serialized and deserialized back and forth through the buffers, which adds latency and also makes it less user friendly (the sketch below shows what this looks like)
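
Here is a minimal sketch of that field-by-field packing (the id/price/quantity record is made up for illustration):

    import java.nio.ByteBuffer;

    // Sketch: without Java object semantics, a simple record has to be packed and
    // unpacked one primitive at a time, in a fixed field order.
    public class ManualSerializationSketch {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocateDirect(64);

            // "Serialize" - write each field as a primitive.
            buf.putLong(12345L);   // id
            buf.putDouble(99.95);  // price
            buf.putInt(7);         // quantity

            // "Deserialize" - read the fields back in exactly the same order.
            buf.flip();
            long id = buf.getLong();
            double price = buf.getDouble();
            int quantity = buf.getInt();

            System.out.println(id + " / " + price + " / " + quantity);
        }
    }
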
Misc notes and references:
  • These tests can also be run on Windows, except for the Linux Tmpfs tests. However, a RAM drive can be used to achieve something similar

Ashwin.

Saturday, January 13, 2007

Following up on what I wrote on Marco's blog, StreamCruncher now supports the Solid BoostEngine, which is a dual-engine Database. Dual-engine means that it supports both In-memory Tables and Disk-based Tables. All the Streams (Input and Output) created via StreamCruncher are created on the Memory Engine and the Queries can combine data from both Disk-based and Memory Tables.

The ReStockAlertTest in the examples is a perfect example of this: it combines the "stock_level" disk-based table (the default in Solid) and the "test_str" Stream defined on the in-memory "test" Table. StreamCruncher creates its artifacts using the "STORE MEMORY" clause.

Something similar is done for MySQL Databases, where SC creates artifacts using the "engine = MEMORY" clause (MySQL is a multi-engine DB).

For both Solid and MySQL, SC transparently adds this special clause to the Input and Output Table/Stream definitions.
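
To make the MySQL case concrete, the generated definitions look roughly like the DDL in this sketch (a hand-written approximation executed over plain JDBC, not SC's actual output; the connection details and the table/column names are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Rough illustration of the kind of DDL that ends up on MySQL for an
    // in-memory stream table - note the "engine = MEMORY" clause.
    public class MemoryTableSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost:3306/test", "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("create table test (id bigint, event_time timestamp, "
                        + "symbol varchar(16), quantity int) engine = MEMORY");
            }
        }
    }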

Friday, February 24, 2017

Spring 2017 tech reading

Hello and a belated happy new year to you! Here's another big list of articles I thought were worth sharing. As always, thanks to the authors who wrote these articles and to the people who shared them on Twitter/HackerNews/etc.

Distributed systems (and even plain systems)

Tuning

SQL lateral view

Docker and containers

Science and math

Golang

Java streams and reactive systems

Java Lambdas

Just Java

General and/or fun

Until next time!

Tuesday, January 05, 2010

SQLite talk - SELECT * FROM SQLite_internals

I haven't used SQLite but I've used and still like H2Database a lot. SQLite is the granddaddy of embedded/in-process databases. Just look at its user base - almost all the smartphone products, Mac, Adobe, aviation, automation...

This talk by its author Richard Hipp is entertaining and informative - Hipp: SELECT * FROM SQLite_internals.

What amused me was when he said SQLite is robust against malloc() failures and that there's a setting that avoids calling malloc() altogether. So SQLite manages memory on its own because (without a real memory manager - *ahem* like Garbage Collectors) memory becomes fragmented and then malloc() fails, which is not a good thing when you are running aviation systems.
[Update: Jan 6, 2010 - So SQLite manages memory on its own because the Operating System's memory manager most often does not work well for long running applications, which I had written about earlier too - in support of *ahem* Mark-Compact or Mark-Relocate Garbage Collectors]

It supports full-text search, R-Trees and thread safety - pretty cool stuff for a 750KB binary.

Sunday, August 20, 2017

2017 late summer tech reading



Hi folks, here's some late summer reading if you are not busy watching TV!

Java

Misc tech

Distributed systems

Misc fun stuff

Until next time! Ashwin Jayaprakash.

Monday, May 12, 2008

Memory - don't forget it hurts

As a Java programmer, I have often wondered: don't long-running programs/servers written in C/C++ have issues with memory allocation at all? I'm not talking about very special programs that you really wouldn't care to write in any language other than Assembly - bit transferring or blitting or something really low level. I'm talking about medium-to-large-scale applications with lots of business logic, decent concurrency requirements... the kind of programs Java and .Net are widely used for.

It turns out that leaving memory allocation completely to the programmer is a pain in the backside. Without a managed runtime, any good engineering team using C/C++ ends up either developing their own memory allocator or buying one, because most of the allocators that come with the standard compiler kits have problems with scalability across multiple cores/CPUs/threads, and over a period of time the heaps become fragmented.

And this is not just ordinary software - I'm talking about video games written in C/C++. It's interesting to read GameDevBlog: STL & Memory allocation on consoles, and about the efforts going on to make Firefox 3 better.

Java, on the other hand, which is supposedly (to the semi-ignorant folk) slow, has a great choice of allocators - many of which work well on multicore platforms. The other advantage (depending on which way you look at it) is that fragmentation is less of a problem with "copying collectors". And with the newer generational GCs, rapid, consecutive allocation of temporary objects is actually very fast in Java. Plus there are other GC settings to choose from. Agreed, there is plenty of room for improvement - which is why Sun is working on the G1 collector, and let's not forget the Real-Time JVM.

Before you go, here are a few more articles to read in your next coffee break, just to see how much Java's HotSpot has improved over the years:

# Java running faster than C

# Compiler Intrinsics

Wednesday, August 12, 2015

Summer 2015 tech reading and goodies

Java:
Go:
Graph and other stores:
  • http://www.slideshare.net/HBaseCon/use-cases-session-5
  • http://www.datastax.com/dev/blog/tales-from-the-tinkerpop
  • TAO: Facebook's Distributed Data Store for the Social Graph
    (snippets)
    Architecture & Implementation
    All of the data for objects and associations is stored in MySQL. A non-SQL store could also have been used, but when looking at the bigger picture SQL still has many advantages:
    …it is important to consider the data accesses that don’t use the API. These include back-ups, bulk import and deletion of data, bulk migrations from one data format to another, replica creation, asynchronous replication, consistency monitoring tools, and operational debugging. An alternate store would also have to provide atomic write transactions, efficient granular writes, and few latency outliers
  • Twitter Heron: Stream Processing at Scale
    (snippets)
    Storm has no backpressure mechanism. If the receiver component is unable to handle incoming data/tuples, then the sender simply drops tuples. This is a fail-fast mechanism, and a simple strategy, but it has the following disadvantages:
Second, as mentioned in [20], Storm uses Zookeeper extensively to manage heartbeats from the workers and the supervisors. This use of Zookeeper limits the number of workers per topology, and the total number of topologies in a cluster, as at very large numbers, Zookeeper becomes the bottleneck.
Hence in Storm, each tuple has to pass through four threads from the point of entry to the point of exit inside the worker process. This design leads to significant overhead and queue contention issues.
    Furthermore, each worker can run disparate tasks. For example, a Kafka spout, a bolt that joins the incoming tuples with a Twitter internal service, and another bolt writing output to a key-value store might be running in the same JVM. In such scenarios, it is difficult to reason about the behavior and the performance of a particular task, since it is not possible to isolate its resource usage. As a result, the favored troubleshooting mechanism is to restart the topology. After restart, it is perfectly possible that the misbehaving task could be scheduled with some other task(s), thereby making it hard to track down the root cause of the original problem.
    Since logs from multiple tasks are written into a single file, it is hard to identify any errors or exceptions that are associated with a particular task. The situation gets worse quickly if some tasks log a larger amount of information compared to other tasks. Furthermore, an unhandled exception in a single task takes down the entire worker process, thereby killing other (perfectly fine) running tasks. Thus, errors in one part of the topology can indirectly impact the performance of other parts of the topology, leading to high variance in the overall performance. In addition, disparate tasks make garbage collection related-issues extremely hard to track down in practice.
    For resource allocation purposes, Storm assumes that every worker is homogenous. This architectural assumption results in inefficient utilization of allocated resources, and often results in over-provisioning. For example, consider scheduling 3 spouts and 1 bolt on 2 workers. Assuming that the bolt and the spout tasks each need 10GB and 5GB of memory respectively, this topology needs to reserve a total of 15GB memory per worker since one of the worker has to run a bolt and a spout task. This allocation policy leads to a total of 30GB of memory for the topology, while only 25GB of memory is actually required; thus, wasting 5GB of memory resource. This problem gets worse with increasing number of diverse components being packed into a worker
A tuple failure anywhere in the tuple tree leads to failure of the entire tuple tree. This effect is more pronounced with high fan-out topologies where the topology is not doing any useful work, but is simply replaying the tuples.
    The next option was to consider using another existing open- source solution, such as Apache Samza [2] or Spark Streaming [18]. However, there are a number of issues with respect to making these systems work in its current form at our scale. In addition, these systems are not compatible with Storm’s API. Rewriting the existing topologies with a different API would have been time consuming resulting in a very long migration process. Also note that there are different libraries that have been developed on top of the Storm API, such as Summingbird [8], and if we changed the underlying API of the streaming platform, we would have to change other components in our stack.
Misc:
Until next time!

Saturday, October 10, 2015

Late summer 2015 tech reading

This should keep you busy for a few weekends.

(Once again, thanks to all the people who shared some of these originally on Twitter, Google+, HackerNews and other sources)

Java/Performance:

Java Bytecode Notes:
Java 8/Lambdas:
Tech Vids:
Data:
Misc:
Some old notes on SQL Cubes and Rollups:
Until next time!

Monday, October 31, 2011

Garbage collection, memory, IO and other scary stories

Happy Halloween! Want to read some scary, low-level, systems-related stuff? Here's a short list of very useful articles I read recently:

SQL anti-patterns.

GC horror stories in the .Net world.

Still not scared? How about some math and statistics:
Boo!
Ashwin.

Sunday, December 13, 2020

Xmas 2020 tech reading

Hi there! Here's some tech reading for your Xmas break (as usual, a hat tip to YouTube, Hacker News and Twitter feeds, which are my usual sources). Happy Holidays!

Tag(s)                  Link
cloud                   The Google Disease Afflicting AWS - Last Week in AWS
data                    A Production Quality Sketching Library for the Analysis of Big Data
data                    A Deep Dive into Spark SQL's Catalyst Optimizer (Cheng Lian + Maryann Xue, DataBricks) - YouTube
data                    Announcing InfluxDB IOx - The Future Core of InfluxDB Built with Rust and Arrow | InfluxData
data                    Apache Pulsar @Splunk
data                    Automatic Clustering at Snowflake
data                    Datadog on Kafka - YouTube
data                    From "Secondary Storage" To Just "Storage": A Tale of Lambdas, LZ4, and Garbage Collection - Honeycomb
data                    Migrating from Druid to Next Gen OLAP on ClickHouse: eBay's Experience - YouTube
data                    Moving from Lambda and Kappa Architectures to Kappa+ at Uber - Roshan Naik - YouTube
data                    Query Optimization at Snowflake (Jiaqi Yan, SnowflakeDB) - YouTube
data                    Real-Time Metrics at Fortnite Scale - Ricky Saltzer - YouTube
data                    Tempo: A game of trade-offs
data                    sled and rio modern database engineering with io_uring - YouTube
data,stats              How to measure anything - Doug Hubbard - YouTube
fun                     I Just Hit $100k/yr On GitHub Sponsors! (How I Did It) | Caleb Porzio
fun                     Jessica Kerr - Keynote: The Origins of Opera & the Future of Programming - YouTube
fun                     Why JSON isn't a Good Configuration Language - Lucidchart
general                 Black Hat USA 2018 Mental Health Hacks Fighting Burnout, Depression and Suicide in the Hacker Commun - YouTube
general                 Burnout - When Your Mind is Tired - Jan Altenberg, Continental Automotive GmbH - YouTube
general                 Developers And Depression | Greg Baugues | Talks at Google - YouTube
general                 FOSDEM 2020 - Recognising Burnout
general                 Feeling good | David Burns | TEDxReno - YouTube
general                 GOTO 2019 • Depression and Burnout: the Hardest Refactor I’ve ever done • Jérôme Petazzoni - YouTube
general                 LISA14 - Burnout and Ops - YouTube
golang                  Go Systems Conf SF 2020 - YouTube
golang                  Manual Memory Management in Go using jemalloc - Dgraph Blog
java                    Continuous Monitoring With JDK Flight Recorder (JFR) - YouTube
java                    Fast, standalone CLI applications with GraalVM Native Image | graalvm
java                    Fix Memory Issues in Your Java Apps | by Chi Wang | Oct, 2020 | Salesforce Engineering
java                    Garbage? Blog - Metaspace in OpenJDK 16
java                    Jamie Coleman — Microservices made easy with MicroProfile, OpenJ9, Open Liberty and OpenShift - YouTube
java                    Project Loom: Scalable, Harmonious Concurrency for the Java Platform - YouTube
java                    Sailing Java 15 - Piotr Przybył - YouTube
java                    Taming Metaspace: a look at the machinery, and a proposal for a better one | FOSDEM 2020 - YouTube
java                    Trustin Lee — Armeria: A microservice framework well-suited everywhere - YouTube
java                    Trustin Lee — Writing a Java library with better experience - YouTube
java                    What's New in IntelliJ IDEA - 2020.3
java                    Why I Wrote A Logging Library · Terse Systems
java                    foojay – a place for friends of OpenJDK
java,system             Jiří Holuša — Intel Optane DC and Java: Lessons learned in practice - YouTube
k8s                     A Walk Through the Kubernetes UI Landscape - Joaquim Rocha, Kinvolk & Henning Jacobs, Zalando SE - YouTube
k8s                     Datadog on Kubernetes Monitoring - YouTube
k8s                     Debugging apps running in Kubernetes An overview of the tooling available - YouTube
k8s                     Ephemeral Environments For Developers In Kubernetes - YouTube
k8s                     Five Hundred Twenty-five Thousand Six Hundred K8s CLI’s - Phillip Wittrock & Gabbi Fisher, Apple - YouTube
k8s                     In Search Of A `kubectl blame` Command - Nick Santos, Tilt - YouTube
k8s                     Kubernetes Network Models (why is this so dang hard?) - Speaker Deck
k8s                     Open Policy Agent: Unit Testing Gatekeeper Policies | Dustin Specker
k8s                     Scaling Fleet and Kubernetes to a Million Clusters
k8s                     Validating Kubernetes YAML for best practice and policies
k8s                     Webinar: Kubernetes and Networks: Why is This So Dang Hard? - YouTube
k8s                     iptables: How Kubernetes Services Direct Traffic to Pods | Dustin Specker
mesh                    Do I Need an API Gateway if I Use a Service Mesh? – Software Blog
mesh                    Getting started with a service mesh starts with a Gateway | by Christian Posta | ITNEXT
mesh                    Istio as an Example of When Not to Do Microservices – Software Blog
mesh                    Using NATS to Implement Service Mesh Functionality, Part 4: Load Balancing and Routing Control | by Dale Bingham | Medium
observability,rust      Production-Grade Logging in Rust Applications | by Ecky Putrady | Better Programming | Nov, 2020 | Medium
rust                    For Complex Applications, Rust is as Productive as Kotlin
rust                    Optimizing Benchpress
rust,golang             Rust vs Go — Bitfield Consulting
rust,k8s                Kubelet Deep Dive: Writing a Kubelet in Rust - Kevin Flansburg, Moose Consulting - YouTube
stats,data              Andrey Akinshin - Performance Testing - Dotnetos Conference 2019 - YouTube
stats,data              Statistical Paradoxes & Logical Fallacies: Don't Believe the Lies your Data Tells
system                  Automate your workflows with Kotlin Forget everything about bash and perl! - YouTube
system                  Cooperative Multithreading · Hazelcast Jet
system                  Designing an ultra low-overhead multithreading runtime for Nim Exposing fine-grained parallelism fo… - YouTube
system                  How io_uring and eBPF Will Revolutionize Programming in Linux - ScyllaDB
system                  Introducing Big Memory Computing, MemVerge, and Memory Machine Software - YouTube
system                  Jsonptr: Using Wuffs’ Memory-Safe, Zero-Allocation JSON Decoder | nigeltao.github.io
system                  Monitor Kafka Consumer Group Latency with Kafka Lag Exporter | @lightbend
system                  Queryable Logging with Blacklite · Terse Systems
system                  SREcon19 Europe/Middle East/Africa - Fault Tree Analysis Applied to Apache Kafka - YouTube
system                  Sloc Cloc and Code - What happened on the way to faster Cloc | Ben E. C. Boyter
system                  Tokio - Making the Tokio scheduler 10x faster
system                  Tokio - Reducing tail latencies with automatic cooperative task yielding

Until next time!

Saturday, October 31, 2020

Halloween 2020 tech reading

Hi there! Here's some tech reading for your Halloween weekend (as usual, a hat tip to Hacker News and Twitter feeds, which are my usual sources).

Tag(s)                  Link
container               Distributed HPC Applications with Unprivileged Containers - YouTube
container               Extending and embedding: containerd project use cases A 2020 containerd project update and descript… - YouTube
data                    Apache StreamPipes – Flexible Industrial IoT Management - YouTube
data                    ClickHouse and the Magic of Materialized Views - YouTube
data                    DataStax Astra: How We Built a Cassandra-as-a-Service (Jim McCollom & Jeff Carpenter, DataStax) - YouTube
data                    Deep Dive: Cortex: 1.0 and Beyond! - Goutham Veeramachaneni, Grafana Labs - YouTube
data                    DuckDB An Embeddable Analytical Database - YouTube
data                    FlinkNDB : Skyrocketing Stateful Capabilities of Apache Flink - YouTube
data                    Handling Variable Time Series Efficiently in ClickHouse – ClickHouse Software And Services | Altinity
data                    Low-Latency Stream Processing with Jet - YouTube
data                    LumoSQL - Experiments with SQLite, LMDB and more SQLite is justly famous, but also has well-known l… - YouTube
data                    Nicholas Schrock: Dagster - An open source Python library for building data applications at Crunch - YouTube
data                    Polyglot ClickHouse--SF ClickHouse September 2020 Meetup - YouTube
data                    PostgreSQL vs. Oracle: Difference in Costs, Ease of Use & Functionality : PostgreSQL
data                    PromCon Online 2020 - TSDB WTF, Ian Billett, Improbable - YouTube
data                    Prometheus Deep Dive - Ben Kochie, GitLab - YouTube
data                    Rockset: Realtime Indexing for Fast Queries on Massive Semi-structured Data (Dhruba Borthakur) - YouTube
data                    Shrinking BSON Documents | Richard Startin’s Blog
data                    SolrCloud in Public Cloud: Scaling Compute Independently from Storage - Salesforce - YouTube
data                    Things we learned about sums | Time series data, faster
data                    Use cases and optimizations of IoTDB - YouTube
data                    Why StreamSQL moved from Apache Kafka to Apache Pulsar | by Simba Khadder | StreamNative | Medium
data                    Zedstore- Compressed Columnar Storage for Postgres - Soumyadeep Chakraborty & Alexandra Wang, VMware - YouTube
data                    dqlite: High-availability SQLite An embeddable, distributed and fault tolerant SQL engine - YouTube
fun                     Eclipse Theia vs Che vs VS Code - YouTube
fun                     Fast Searching with ripgrep — Marius Schulz
fun                     If Hemingway Wrote JavaDocs - YouTube
fun                     Maintaining an open source project is a lot more than just writing code
gitops                  Code to Production - Kubernetes with Tekton and GitOps - Mario Vázquez & Ryan Cook, Red Hat - YouTube
gitops                  GitOps Practitioner Highlight: Javeria Khan (Palo Alto Networks) - YouTube
java                    A Comparative Review of Microservice Frameworks - YouTube
java                    Apache Arrow and Java: Lightning Speed Big Data Transfer
java                    ByteBuffers are dead, long live ByteBuffers! - YouTube
java                    Contract-driven development with OpenAPI 3 and Vert.x | DevNation Tech Talk - YouTube
java                    DataStax Examples: A Comparison of Java Frameworks - YouTube
java                    Fun with Java Records - Benji's Blog | Benji's Blog -
java                    In-Memory Computing Essentials for Java Developers and Architects - YouTube
java                    JDK 15
java                    Java after Eleven | DevNation Day 2020 - YouTube
java                    Java's Transformation in the Cloud-Native Era - Alibaba Cloud Community
java                    Modern Java toys that boost productivity, from type inference to text blocks
java                    The Path Towards Spring Boot Native Applications - YouTube
java                    TypeScript, client-side views and endpoints in Vaadin - Q&A | Vaadin
java                    ZGC: The Next Generation Low-Latency Garbage Collector - YouTube
java,allocation         Airlift slice - Memory allocator used in Presto
java,allocation         DataSketches Memory
java,allocation         Memory management in LWJGL 3
java,allocation         Netty.docs: Using as a generic library
java,cloud              Java and AWS Lambda in 2020 - Cold Starts and More - YouTube
java,golang             Peter Nagy, Mark Nelson Can Java microservices be as fast as Go - YouTube
java,security           Implementing Microservices Security Patterns and Protocols with Spring Security - YouTube
k8s                     (Kubernetes as a Service) as a Service | Pachyderm
k8s                     Go? Bash! Meet the Shell-operator - Andrey Klimentyev & Dmitry Stolyarov, Flant - YouTube
k8s                     Introducing kubectl flame: Effortless Profiling on Kubernetes | by Eden Federman | Aug, 2020 | Medium
k8s                     Kubernetes operators in Python with Kopf | DevNation Day 2020 - YouTube
k8s                     Meet faasd. Look Ma’ No Kubernetes! - Alex Ellis, OpenFaaS Ltd - YouTube
k8s                     Past, now and future about Apache YuniKorn (incubating): Cloud-Native resource scheduler - YouTube
k8s                     The Almighty Pause Container - Ian Lewis
k8s,java                Java to Kubernetes faster and easier | DevNation Day 2020 - YouTube
kotlin,grpc             Next Level gRPC With Kotlin and Coroutines - Marco Ferrer, OfferUp - YouTube
rust,system             Deserializing JSON really fast
system                  A Google Cloud support engineer solves a tough DNS case | Google Cloud Blog
system                  Developing IoT Edge - YouTube
system                  Guix: Unifying provisioning, deployment, and package management in the age of containers - YouTube
system                  Using Eclipse IoT Packages – Experience from Eclipse Kuksa and Edge Deployments - YouTube
workflow                GOTO 2019 • 3 Common Pitfalls in Microservice Integration & How to Avoid Them • Bernd Rücker - YouTube
workflow                Introducing the Flowable Process Engines by Paul Holmes Higgin & Joram Barrez - YouTube

Until next time!