So close to South Bay Area. Prospect Road trail is a little steep but short. Worth visiting again because there are so many trails.
Saturday, September 24, 2011
Tuesday, September 13, 2011
Monday = one of the days of the week when you forget if you had coffee and have to look in the trash can for the discarded coffee cup, just to check.
Offloading data from the JVM heap (a little experiment)
Last time, I wrote about the possibility of using Linux shared memory to offload cacheable/reference data from the JVM. To that end I wrote a small Java program to see if it was practical. The results were better (and in some cases stranger) than I had expected.
Here's what the test program does:
- Create a bunch of java.nio.ByteBuffers that add up to 96MB of storage
- Write ints starting from the first buffer, all the way to the last one - that's writing a total of 96MB of some contrived data
- For each test, the buffer creation, writing and deletion are repeated 24 times (JIT warm-up)
- For each such test iteration, measure the memory (roughly) used in the JVM heap, the time taken to create those buffers and the time taken to write 96MB of data
- Obviously, some of this may sound fishy - like why use ByteBuffers instead of just writing to an OutputStream, or why write to the buffers in sequence. My intention was just to get a ballpark figure for the performance and the viability of moving data off the JVM heap
- There are really 5 different ways to create the buffers, plus 2 variations of the tests in which the buffer (block) sizes differ but the total bytes written stay the same
- The buffers (blocks) for each variation are created as follows (a minimal code sketch appears after this list):
- Ordinary HeapByteBuffers inside the JVM heap itself - as a baseline for performance
- DirectByteBuffers
- A file created on Ext4fs using RandomAccessFile, with parts of the file memory mapped using its FileChannel. The file is opened in "rw" mode; "rwd" and "rws" are the other options
- The same as above, but the file resides in /dev/shm, the in-memory, shared memory virtual file system (Tmpfs)
- The buffers are created using Apache's Tomcat Native Library, which in turn uses the Apache Portable Runtime (APR) library. Its shared memory (Shm) feature was used to create the buffers. This is similar to DirectByteBuffers, but the buffers reside in a common area of OS memory, not owned by any single process and shared between processes (similar to /dev/shm but without the filesystem wrapper overhead)
- The machine used to test was my moderately powered Windows 7 home laptop with 8GB RAM, 2.3GHz i5 running a Cloudera Ubuntu Linux VMWare Player. There were a few other processes running, but nothing that was using CPU extensively. 500MB+ memory was free and available
- The VM had 1GB RAM and the JVM heap was 256MB
- The test program was run once for each configuration, but each test itself ran 24 times to allow the JIT to warm up and the file system caches to stay warm where needed
- The test prints out the timings with headers which were then compiled into a single text file and then analyzed in RStudio
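The original sources are linked further down. As a rough illustration only, here is a minimal sketch - my own class, constant and file names, not the original test code - of how the buffer variations are created and how the ints are written:

```java
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// A sketch of the buffer variations, not the original benchmark code.
public class BufferCreationSketch {

    static final int BLOCK_SIZE = 4 * 1024 * 1024; // 4MB blocks
    static final int NUM_BLOCKS = 24;              // 24 x 4MB = 96MB in total

    public static void main(String[] args) throws Exception {
        // 1. Heap buffer - the baseline, allocated inside the JVM heap
        ByteBuffer heap = ByteBuffer.allocate(BLOCK_SIZE);

        // 2. Direct buffer - allocated outside the JVM heap
        ByteBuffer direct = ByteBuffer.allocateDirect(BLOCK_SIZE);

        // 3. Memory-mapped file on ext4, opened in "rw" mode ("rwd"/"rws" also work).
        //    Pointing the path at /dev/shm/mmap_test.dat instead gives variation 4
        //    (Tmpfs) with exactly the same code.
        try (RandomAccessFile raf = new RandomAccessFile("mmap_test.dat", "rw");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer mapped =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, BLOCK_SIZE);

            // Write ints sequentially until the block is full. The actual test
            // does this across all the blocks for 96MB of contrived data.
            for (int i = 0; i < BLOCK_SIZE / 4; i++) {
                mapped.putInt(i);
            }
        }
    }
}
```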
The summary from RStudio (95th percentile over the 24 iterations):

| # | block_size (bytes) | test_type | buffer create + write time (ms, 95th pct) | JVM heap memory used (bytes, 95th pct) |
|---|---|---|---|---|
| 1 | 4096 | direct | 1555.65 | 3047456 |
| 2 | 4096 | file_/dev/shm/mmap_test.dat_rw | 661.70 | 3047632 |
| 3 | 4096 | file_mmap_test.dat_rw | 2055.75 | 3047632 |
| 4 | 4096 | heap | 1071.15 | 102334496 |
| 5 | 4194304 | direct | 653.85 | 3008 |
| 6 | 4194304 | file_/dev/shm/mmap_test.dat_rw | 561.40 | 3184 |
| 7 | 4194304 | file_mmap_test.dat_rw | 3878.25 | 3184 |
| 8 | 4194304 | heap | 1064.80 | 100664960 |
| 9 | 4194304 | shm | 678.40 | 2496 |
Interpretation of the results:
- The test where the block size was 4KB had quite a lot of memory overhead for the non-Java-heap ByteBuffers. Memory mapping was also slow for these small sizes, as the Javadoc for FileChannel.map() itself warns
- The JVM heap test was slower than I had expected (for larger ByteBuffers). I was expecting it to be the fastest. Perhaps it was the small-memory (1GB) virtualized OS it was running in. For smaller block sizes approaching 1KB or less, the JVM heap performance is unbeatable, but in these tests the focus was on larger block sizes
- The Apache shared memory test would not even start for the 4KB tests as it would complain about "not enough space"
- Almost everything fared well in the larger 4MB test. The per-block overhead was lower for the off-heap tests, and performance was nearly identical across /dev/shm, Apache Shm and DirectByteBuffer
- The sources for this test are available here.
- To run all the tests except Apache Shm you only need to compile JavaBufferTest and run it with the correct parameters
- To run all tests, you can use the sub-class AprBufferTest which can test Apache Shm and also the remaining tests. To compile this you'll need tomcat-coyote.jar from apache-tomcat-7.0+. To run this you'll need the Jar file and the Tomcat Native bindings - tcnative.dll or libtcnative for Linux
Some advantages of keeping the data off the JVM heap:
- The mapped file or the shared memory segments can outlive the JVM's life span. Another process can come in later, attach to it and read the data
- Reduces the GC pressure
- Zero-copy transfers to another file, network socket or device are possible using FileChannel.transferTo() (see the sketch after this list)
- Several projects and products have used this approach to host large volumes of data in the JVM
- HBase-3455
- Details of HBase Slab allocator
- Cassandra-2252
Some drawbacks:
- The data has to be stored, read and written as primitives via the ByteBuffer methods - putInt(), putFloat(), putChar() etc.
- Java objects cannot be read/written like in a simple Java program. Everything has to be serialized and deserialized back and forth from the buffers. This adds to latency and also makes it less user friendly
- These tests can also be run on Windows, except for the Linux Tmpfs tests. However a RAM Drive can be used to achieve something similar
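To make the primitive-access and zero-copy points concrete, here is a hedged sketch - the file names, sizes and offsets are invented for the example - of writing fields into a mapped buffer and then handing its backing file off without copying it through the heap:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative only: every field goes in and out of the buffer as a primitive,
// and the backing file can then be transferred without passing through the heap.
public class OffHeapUsageSketch {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("/dev/shm/cache.dat", "rw");
             FileChannel src = raf.getChannel();
             FileChannel dest = new RandomAccessFile("copy.dat", "rw").getChannel()) {

            MappedByteBuffer buf = src.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            // No Java objects here - fields are written and read at explicit offsets
            buf.putInt(0, 42);          // e.g. a record id at offset 0
            buf.putFloat(4, 3.14f);     // a float field at offset 4
            int id = buf.getInt(0);
            float value = buf.getFloat(4);
            System.out.println("id=" + id + ", value=" + value);

            // Flush the mapped changes, then transfer the file zero-copy to
            // another channel (a file here, but a SocketChannel works the same way)
            buf.force();
            src.transferTo(0, src.size(), dest);
        }
    }
}
```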
Ashwin.
Tuesday, September 06, 2011
Blue Lakes and Clear Lake (CA's biggest natural lake)
Clear Lake in Kelseyville is a nice and peaceful place to visit and spend a weekend. Unlike Tahoe, it feels cozier and less commercialized, and it is close to Napa. You can rent lake-front property or spacious homes, relax and unwind. It is also the largest natural freshwater lake entirely within California (Tahoe does not count because it spreads into Nevada).
Blue Lakes is a smaller lake about 45 minutes from Clear Lake (Soda Bay) and is a perfect spot to float around on the lake and spend a warm afternoon with family and friends.
Ashwin.
Thursday, September 01, 2011
RAM disk is already in Linux and nobody told you (a.k.a Shared memory for ordinary folk)
The Linux 2.6 kernel these days comes with an in-memory file system. This is really shared memory across processes ("Everything old is new again!").
The beauty of the Linux implementation (like everything else there) is that this shared memory looks like a regular file system - /dev/shm (though it is really Tmpfs). So your application - yes, even a Java application - can use this shared memory across processes as if it were just writing to a regular file. You can create directories, set up a memory quota for the file system, tail files and change into directories like any other, and open-write-close files that can then be read by other processes. Convenient!
RAM disks are not a new concept. But a RAM disk driver that is built into the kernel adds a totally different level of credibility and trust to the concept.
All the contents of this directory are in memory. Nothing gets written to the disk, no flush, no IO wait times. Naturally, this being in-memory, you will lose all the contents when your OS reboots. What possible use can this be to us, I hear you ask? Well well.. where do I begin:
- Memory is cheap. 96GB RAM is quite common these days on server class machines
- You can run a big RDBMS on this file system. Typically these databases are anyway replicated/clustered over the network
- So you can run the entire DB in-memory and still have HA because the replica is running on another machine anyway (low "disk" latency + high TPS + HA)
- Why write to a local disk which can crash anytime
- Why spend so much on expensive SSDs
- 10GigE is already here
- Run medium sized JVMs and push all the heavy data to this shared memory filesystem
- You free up the heap to do simple in-JVM caching and reduce the pressure on GC by moving all the data to /dev/shm
- If your JVM crashes, another JVM can be notified to pick up that data since it is just stored as a bunch of files and directories (a tiny sketch of this hand-off follows this list)
- People used to do IPC all the time using old fashioned shared memory constructs but it fell out of favor because networks and network drivers became quite fast
- Also moving away from IPC to TCP-over-localhost gave you freedom to spread your processes across machines (TCP-over-GigE)
- Perhaps it is now worthwhile to re-examine that approach and shave precious milliseconds by not copying data back and forth between processes over sockets
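To make the hand-off idea concrete, here is a hedged sketch - the paths, directory name and payload are invented - of one JVM dropping data under /dev/shm and another process later picking it up as a plain file:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;

// Illustrative only: the "shared memory" is just a file under /dev/shm,
// so the producer and the consumer can be completely separate processes.
public class ShmHandoff {
    public static void main(String[] args) throws Exception {
        String path = "/dev/shm/handoff/payload.bin";
        new File("/dev/shm/handoff").mkdirs();

        if (args.length > 0 && args[0].equals("produce")) {
            // Producer JVM: write the payload and exit (or crash) - the data stays put
            try (FileOutputStream out = new FileOutputStream(path)) {
                out.write("cached reference data".getBytes("UTF-8"));
            }
        } else {
            // Consumer JVM (or any other process): read the same "memory" as a file
            byte[] data = new byte[1024];
            try (FileInputStream in = new FileInputStream(path)) {
                int n = in.read(data);
                System.out.println("Read " + n + " bytes from " + path);
            }
        }
    }
}
```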
The program is a simple Java file writer that forces a flush every few tens of bytes (which means nothing when writing to /dev/shm). The destination file and its path can be specified as a command line argument, so it's easy to compare performance between /dev/shm and a regular disk path. I ran these tests on a Cloudera-Ubuntu VMWare image running on my 64-bit Windows 7 laptop with 4GB RAM and a 2.3GHz dual-core Intel i5. Not surprisingly, the in-memory write was about 7x faster for a 112KB file. Also, the laptop was running on batteries, which means the processor steps down its speed to save power. A minimal sketch of such a writer follows.
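This is a sketch along those lines, not the original program - the chunk size, iteration count and the use of sync() are my own assumptions:

```java
import java.io.FileOutputStream;

// A sketch of the comparison writer: point it at /dev/shm/test.dat and then at a
// regular disk path and compare the timings.
public class FlushingFileWriter {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "/dev/shm/test.dat";
        byte[] chunk = new byte[32]; // a few tens of bytes per write

        long start = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream(path)) {
            for (int i = 0; i < 3584; i++) {   // 3584 x 32 bytes = 112KB in total
                out.write(chunk);
                out.flush();                   // force a flush after every chunk
                out.getFD().sync();            // and push it all the way to the "device"
            }
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("Wrote 112KB to " + path + " in " + elapsedMs + " ms");
    }
}
```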
Why people do not talk about this relatively new Linux feature out loud is perplexing.
Other interesting commands you can run against this file system:
- mkdir
- cat
- tail, grep
- ipcs -a
- df -k
Until next time,
Ashwin.