While starting to convert much of our IO to using nio byte buffers, with an eventual goal of pushing that further up into the application, I decided to investigate performance in some more detail. I’d seen some blog posts claiming that performance wasn’t great, in particular a very old blog post from 2004. That post included a simple benchmark, which I grabbed, converted to use int buffers, dropped the count to 10,000,000 int values, and ran. The source is available as niotest.java. The results weren’t encouraging:
| Operation | Java 1.6 | Java 1.7 |
|---|---|---|
| array put | 26 ms | 31 ms |
| absolute put | 129 ms | 130 ms |
| relative put | 130 ms | 132 ms |
| array get | 20 ms | 19 ms |
| absolute get | 116 ms | 119 ms |
| relative get | 130 ms | 137 ms |
As part of that, I decided to clean up the benchmark code and put everything together in a nice package. The source for the new benchmark is ArrayBenchmark.java. Like the original, it works on arrays/buffers of 10,000,000 integers, first writing each element (with just its index) in the put pass and then reading each element back in the get pass. The additional “copy into” benchmarks time how long it takes to copy all the ints into an existing int array.
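For reference, each pass boils down to loops like the sketch below. This is a simplified illustration of what gets measured, not the actual ArrayBenchmark.java source; timing, warm-up, and repetition are omitted:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

// Rough shape of the benchmark passes (illustrative sketch only).
public class BufferLoops {
    static final int N = 10000000;

    // plain Java int array: "array put" / "array get"
    static long arrayPass(int[] a) {
        for (int i = 0; i < N; i++) a[i] = i;             // put
        long sum = 0;
        for (int i = 0; i < N; i++) sum += a[i];          // get
        return sum;                                       // consume the values so the loops aren't optimized away
    }

    // IntBuffer: absolute and relative put/get
    static long bufferPass(IntBuffer buf) {
        for (int i = 0; i < N; i++) buf.put(i, i);        // absolute put
        long sum = 0;
        for (int i = 0; i < N; i++) sum += buf.get(i);    // absolute get

        buf.clear();
        for (int i = 0; i < N; i++) buf.put(i);           // relative put
        buf.flip();
        while (buf.hasRemaining()) sum += buf.get();      // relative get
        return sum;
    }

    // "copy into": bulk copy of the whole buffer into an existing int[]
    static void copyInto(IntBuffer buf, int[] dest) {
        buf.rewind();
        buf.get(dest);
    }

    public static void main(String[] args) {
        int[] array = new int[N];
        IntBuffer heap = IntBuffer.allocate(N);
        IntBuffer direct = ByteBuffer.allocateDirect(N * 4)
                                     .order(ByteOrder.nativeOrder())
                                     .asIntBuffer();
        long check = arrayPass(array) + bufferPass(heap) + bufferPass(direct);
        copyInto(direct, array);
        System.out.println("checksum: " + check);
    }
}
```

Here are the results: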
===== native java int array

| | Java 1.6 | Java 1.7 |
|---|---|---|
| put | 27.971961 ms | 42.464894 ms |
| get | 32.949032 ms | 14.826696 ms |
| copy into | 20.069191 ms | 15.853778 ms |

===== nio heap buffers

| | Java 1.6 | Java 1.7 |
|---|---|---|
| put | 839.730766 ms | 57.876372 ms |
| put (relative) | 844.618171 ms | 80.951102 ms |
| get | 742.287840 ms | 80.578592 ms |
| get (relative) | 759.317101 ms | 79.563458 ms |
| copy into | 769.494235 ms | 91.685437 ms |

===== nio direct buffers

| | Java 1.6 | Java 1.7 |
|---|---|---|
| put | 161.480338 ms | 31.951206 ms |
| put (relative) | 170.194344 ms | 47.541457 ms |
| get | 179.621322 ms | 18.913808 ms |
| get (relative) | 164.425689 ms | 29.387186 ms |
| copy into | 21.940450 ms | 16.936357 ms |

===== custom buffers

| | Java 1.6 | Java 1.7 |
|---|---|---|
| put | 151.095845 ms | 48.125012 ms |
| put unchecked | 148.538301 ms | 51.241096 ms |
| get | 146.243837 ms | 36.636723 ms |
| get unchecked | 138.765277 ms | 31.641897 ms |
| copy into | 41.643050 ms | 20.206091 ms |
| copy into (copyMemory) | N/A | 16.845686 ms |
These numbers show a significant improvement in Java 1.7! Direct buffers are roughly as fast as regular arrays, which is what I had hoped to see originally. The “custom buffers” section is a hand-rolled integer buffer class that uses Unsafe.getInt/putInt without much of the additional nio buffer machinery or abstractions, to see how much that was contributing to the overhead. The overhead is noticeable in Java 1.6, but in Java 1.7 the original nio buffers win handily, even against “unchecked” versions of get/put that don’t do any bounds checking. I also added heap (non-direct) buffers, to see if there was any truth to a claim I read that mixing direct and non-direct buffers causes an overall slowdown, because there would then be two implementations of the abstract parent class and the VM couldn’t optimize the virtual calls. That doesn’t seem to be the case any more — the JIT doesn’t care.
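For the curious, the custom buffer is essentially a thin wrapper around sun.misc.Unsafe. The class below is a simplified reconstruction of the idea, not the actual code; the reflection dance to grab the Unsafe instance is the usual unsupported-API hack:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Hand-rolled int buffer backed by native memory via sun.misc.Unsafe.
// Sketch only: no alignment, ownership, or error handling to speak of.
final class UnsafeIntBuffer {
    private static final Unsafe U;
    static {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            U = (Unsafe) f.get(null);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private final long base;     // address of the native allocation
    private final int capacity;  // number of ints

    UnsafeIntBuffer(int capacity) {
        this.capacity = capacity;
        this.base = U.allocateMemory((long) capacity * 4);
    }

    int get(int index) {
        if (index < 0 || index >= capacity) throw new IndexOutOfBoundsException();
        return U.getInt(base + (long) index * 4);
    }

    void put(int index, int value) {
        if (index < 0 || index >= capacity) throw new IndexOutOfBoundsException();
        U.putInt(base + (long) index * 4, value);
    }

    // "unchecked" variants skip the bounds test entirely
    int getUnchecked(int index)             { return U.getInt(base + (long) index * 4); }
    void putUnchecked(int index, int value) { U.putInt(base + (long) index * 4, value); }

    // bulk copy into an existing int[] via Unsafe.copyMemory; the Object-based
    // overload is Java 7+, hence the N/A entry for 1.6 above
    void copyInto(int[] dest) {
        U.copyMemory(null, base, dest, Unsafe.ARRAY_INT_BASE_OFFSET, (long) capacity * 4);
    }

    void free() {
        U.freeMemory(base);
    }
}
```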
But I am now very confused about why the original benchmark code and the new code give such different results. The plain int array put now comes in at 42 ms, slower than the 31 ms in the original benchmark, and slower still than the 27 ms that the same benchmark gets under Java 1.6. The other numbers are all much better, though — compare, for example, direct buffer absolute “get” performance: 119 ms in the first benchmark, 19 ms in the second. That’s a 6x speed difference. The same compiler and JVM are used for both. I even added a “mixed” set to the new benchmark that does the operations in the exact same order as the first one (interleaving operations on arrays and int buffers), and it didn’t matter.
The new benchmark numbers are really encouraging, and mean that we’re probably going to push the nio buffers into many places, simplifying our interaction with IO, OpenGL, algorithms implemented in JNI, and the like, as well as letting us move the bulk of our large data out of the Java heap. However, I’d like to understand why the two benchmarks give such vastly different performance results. I’ve stared at the source for a while, and I’m virtually certain that they’re doing the same operations on identically-sized arrays. Can someone explain the overall slowness of the first benchmark? Why did its numbers hardly change at all between Java 1.6 and 1.7? Why are the 1.6 numbers in the second benchmark slower than the 1.6 numbers in the first?
The last post here was about leaving Mozilla, and I mentioned a tiny bit about where I was going and what I’ll be working on. Then there was radio silence for a few months. So, what have I been up to?
First off, we launched WebGL 1.0 at GDC 2011. I was serving as chair at the time, though after the launch Ken Russell at Google took over as I was leaving Mozilla. I haven’t been as involved in WebGL (or Mozilla at all, really) over the past few months as I’d like to be, because of a number of new job things and moving. I hope to get back into it as things settle down more, though I’ve been saying that for the past month. Soon!
I spent the first two months in Australia, at the headquarters of DownUnder Geosolutions in happening exotic vibrant sunny Perth. There was a lot of crash-course geophysics instruction going on. I basically inhaled 3-D Seismic Interpretation, and asked lots of dumb questions. I can now reasonably hold a conversation about 2-D, 3-D, and 4-D data sets; TWT vs. TVDSS (not to be confused with TWSS); stacking; horizons; faults; wells and well logs; etc. (At least, I can fake knowing what I’m talking about, which is often all that matters.)
The app itself, DUG Insight, is really all about data visualization, and about UI to get at facets of the data in a reasonable way. We’re nowhere near where we want to be with the UI, but in many ways we’re already light years ahead of the competition — which often looks something like this.
One of the first things I worked on was adding some 3D visualization to well log data. This is basically data gathered along a well bore by instruments that are either part of the drilling package or are otherwise sent down. It’s often the only “truth” data that you have that’ll tell you exactly what’s down there. When showing this data in 3D, it’s nice to be able to vary the thickness of the cylinder based on the data, to give another visual cue along with an applied colourbar. The result:
There are three wells visible here along with their logs, and some seismic data that I made translucent for the screenshot. The tricky thing here is that the data is quite high frequency, and is often very finely sampled. Very small spikes or troughs can be significant, but normal data minification often just smooths them away. We don’t have this solved in the 3D view yet, but in another view I did some work to attempt to show at least the local maxima that contribute to each pixel:
The right side is a zoomed-in version of the data on the left. On the right, we can represent all the data accurately, since we have more pixels available than we have actual samples. On the left, though, more than one data sample contributes to each vertical pixel. The gray bars indicate the maximum value of all the data points that contribute to each pixel. The difference can be pretty big; here’s a side-by-side render of the same data, one with the gray max values and one without:
There are other options here, and we’re still working on figuring out how to expose them to the user (for example, drawing a line down the average value and drawing a bar from min to max).
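To make that concrete, the reduction behind the gray bars is just a per-pixel min/max pass over whatever samples land in each pixel, so that narrow spikes survive minification. Here is a hypothetical helper in that spirit (the names and signature are mine, not the DUG Insight code):

```java
// Collapse a densely sampled log curve into per-pixel {min, max} ranges so that
// small spikes and troughs still show up after minification. Hypothetical sketch.
final class LogMinification {
    // samples: raw log values; pixels: height of the on-screen track in pixels.
    // Returns a [pixels][2] array holding {min, max} for each pixel row.
    static float[][] minMaxPerPixel(float[] samples, int pixels) {
        float[][] out = new float[pixels][2];
        for (int p = 0; p < pixels; p++) {
            // range of samples that maps onto this pixel row
            int start = (int) ((long) p * samples.length / pixels);
            int end   = (int) ((long) (p + 1) * samples.length / pixels);
            if (end <= start) end = start + 1;   // zoomed in: at least one sample per pixel
            float min = Float.POSITIVE_INFINITY;
            float max = Float.NEGATIVE_INFINITY;
            for (int i = start; i < end && i < samples.length; i++) {
                if (samples[i] < min) min = samples[i];
                if (samples[i] > max) max = samples[i];
            }
            out[p][0] = min;
            out[p][1] = max;
        }
        return out;
    }
}
```

When there are more pixels than samples (the zoomed-in case), each pixel’s min and max collapse to the same value and the display is exact; when many samples share a pixel, the max (or the full min-to-max bar) preserves the spikes that plain smoothing would throw away.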
Other stuff I’ve worked on so far has been much less visual. The codebase is somewhat old, and was often written for correctness and/or purity, and less so for performance and ease of use. For example, we interpolate lots of data; our current interpolators tend to go through a number of function calls *per sample* and do no caching even though we’ll often interpolate multiple samples between the same two adjacent data lines. This is on the list of things to correct, because, well, babytown frolics. There will also be a lot more 3D visualization work done as soon as our next release ships.
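Back to those interpolators for a moment: the kind of fix I have in mind is sketched below. This is hypothetical code, not our actual interpolator; the idea is simply to remember the last bracketing pair of data lines and reuse it while consecutive queries stay inside that interval, instead of re-searching on every sample:

```java
// Linear interpolation over an irregularly sampled trace (xs must be ascending,
// with at least two entries). Caches the last bracketing interval, since
// consecutive queries usually fall between the same two data lines. Hypothetical sketch.
final class CachedLinearInterpolator {
    private final double[] xs, ys;
    private int lo = 0;  // cached index of the interval [xs[lo], xs[lo + 1]]

    CachedLinearInterpolator(double[] xs, double[] ys) {
        this.xs = xs;
        this.ys = ys;
    }

    double valueAt(double x) {
        // fast path: same interval as the previous call
        if (!(x >= xs[lo] && x <= xs[lo + 1])) {
            lo = findInterval(x);
        }
        double t = (x - xs[lo]) / (xs[lo + 1] - xs[lo]);
        return ys[lo] + t * (ys[lo + 1] - ys[lo]);
    }

    private int findInterval(double x) {
        int i = java.util.Arrays.binarySearch(xs, x);
        if (i < 0) i = -i - 2;  // not found: insertion point minus one is the interval below x
        return Math.max(0, Math.min(i, xs.length - 2));  // clamp so out-of-range x extrapolates
    }
}
```

Nothing clever, but it turns the common case into a couple of comparisons and a multiply-add instead of a fresh search and a pile of function calls per sample.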
In other news, I also drove cross-country from the San Francisco Bay Area to Toronto, where I’m now living. It should have been a lot more fun, but the various camping/hiking I had planned along the way got cancelled because of bad weather… so I just drove straight through, taking about 4.5 days to do the drive. The Cross-country Road Trip playlist on Rdio that friends helped me put together was pretty awesome; I listened to it straight through and probably ended the drive with a few hours of it still to go. There’s some great music there, along with a few questionable choices that made me laugh during the trip (getting rickrolled driving into Colorado; the Oregon State Song coming up at some point; etc.).
Last but not least, DUG is hiring in Toronto — if you have some data viz, UI, or rockstar Java chops, send me an email (vladimir at pobox dot com). We have just about every interesting software engineering problem, so there’s a lot of good challenges to tackle. The Toronto team is currently small (just three of us), in a great brick-and-beam building near Spadina and Richmond; it’s a pretty fun environment.
I’ve recently made the decision to leave Mozilla for various reasons, largely because I’ve been wanting to do something different. Here are some thoughts on this.
I became involved in Firefox and the Mozilla community when I was suckered (er, talked) into fixing a bug — a bug that involved RDF, bookmarks, and the template builder. (Thankfully, all three of those things — RDF, old bookmarks, template builder — have been almost completely excised from Firefox.) Somehow I ended up sticking around to fix more bugs and get involved in bigger projects, from working with Stuart to rework our graphics engine up to getting the Android port going.
So, what am I doing next? Something entirely different. I’ll be doing software in a totally different industry, joining some friends in bringing some disruptive innovation there. I’ll still be writing software, though with a much smaller team — something that I’ve come to enjoy, as being small and scrappy has a lot of advantages and is a lot of fun. There are a lot of interesting technical challenges, mainly related to dealing with large volumes of data (multiple terabytes not being uncommon) — processing, visualization, analysis. You’ll also likely see me hacking on various bits in my own spare time, whether in Mozilla, web apps, or mobile apps. I plan on continuing to blog about these topics.
For WebGL in particular, I’ll be around to launch the initial version of the spec, and plan on continuing to be involved in the standards group. I might not be hacking on Mozilla’s implementation as frequently, but it’ll be in good hands.
Thank you to all the people that I’ve had a chance to work with and learn from over the past 5 (almost 6!) years. I’ll still be around on IRC and other forums, so I won’t be pulling a disappearing act any time soon, and I’m looking forward to seeing Firefox 4 out there!
One of the parts of Firefox 4 that I’m excited about is support for WebGL, a standard for accelerated 3D rendering on the web. We’ve been working on this for quite a while, and I’ve been doing experiments with a similar kind of 3D support for a few years now. With the upcoming release of Firefox 4 Beta 8, WebGL support is starting to firm up.
What is WebGL?
WebGL allows web developers to take advantage of the 3D capabilities of modern video cards to add 3D displays to their web applications. Apps that would have been possible only on the desktop or with plugins become possible in any modern browser that supports WebGL: 3D games, interactive product displays, scientific and medical visualization, shared virtual environments, and 3D content creation all become possible on the web.
For users, this means a more interactive and visually interesting web. It means having access to a wider range of applications and experiences on any device that supports the full web, instead of being limited to specific devices or platforms.
WebGL is being developed within the Khronos Group, the same group responsible for OpenGL and OpenGL ES. Members of the WebGL group include Mozilla, Google, Opera, and Apple, as well as a number of hardware vendors who are interested in making sure that WebGL content can run well on both desktop and mobile hardware. There’s a lot of support for WebGL!
I recently gave a talk at NVidia’s GPU Technology Conference about WebGL. The video stream is available (though sadly not using HTML5 video!), and it’s a good (though technical) overview of WebGL.
Let me check out some demos!
Here are a couple of demos showcasing WebGL technology by Mozilla, Google, and others.
Some platform-specific notes: WebGL is currently disabled on Linux due to some build issues; it should be getting re-enabled in beta 9. For Windows users, you may need to install the DirectX Runtime if these demos don’t work for you or have glitches — this allows Firefox to use an alternate rendering path that might be better supported on your system, especially on systems with Intel GPUs. We’re working on removing the need for this separate install in a future build.
Check out Dave’s post for more details about the Flight of the Navigator demo. You can find many more projects and demos using WebGL on the web and linked from webgl.org — some other great examples are a web-based 3D editor called 3DTin and a vortex/anti-vortex annihilation simulation.
Give me more technical details!
WebGL brings the OpenGL ES 2.0 API to the HTML5 Canvas element. 3D content is confined to the canvas, but the canvas follows normal HTML compositing rules. For example, a 2D UI can be layered on top of the 3D scene using normal CSS mechanisms, and content underneath the canvas will show through transparent portions. In addition, CSS properties can be applied to the canvas itself, for effects like fading the entire scene in or out.
The WebGL API interacts well with the rest of the web platform; specifically, support is provided for loading 3D textures from HTML images or videos, and keyboard and mouse input is handled using familiar DOM events.
As a developer, how do I learn more about WebGL?
WebGL is based on OpenGL ES 2.0, which happens to be the same 3D API used for Android and iOS development, and which is itself based on the desktop OpenGL API. Many resources available for ES 2.0 development translate almost directly to WebGL development.
Unlike desktop or mobile OpenGL development, it’s very easy to get started with WebGL. Some simple HTML and JS content lets you immediately start writing WebGL code. A number of tutorials already exist that focus on WebGL; you can take a look at Learning WebGL’s lessons to help you get started.
Here are some web resources with more information:
- webgl.org — official WebGL page, including specification and resource links
- learningwebgl.com — blog with regular updates on WebGL happenings
WebGL focuses on OpenGL ES 2.0 feature compatibility to ensure content compatibility with mobile devices. However, ES 2.0 is behind the latest advances on the desktop today. In the future, various desktop features may become available in WebGL in the form of extensions.
There’s still some work to do on the Firefox side as well, in particular removing some performance bottlenecks on Windows when we’re using ANGLE for Direct3D compatibility.
WebGL will enable web developers to create new experiences for their users. As with any new technology, initial experimentation will lead to developers better understanding how to fully leverage WebGL. There’s already tremendous interest in WebGL, as can be seen from the wealth of frameworks and samples that exist even before WebGL has shipped in any final browser release! With WebGL, alongside our work on HTML5 video and audio support (including direct audio data access), Firefox supports a full set of web technologies for building rich and compelling applications on the web.
I’ve often been frustrated with the lack of a good solution for syncing a photo library amongst multiple computers. Typically, I only have my laptop with me when I take photos, especially on a trip. When I get back home, I want to sort and edit on a grown-up computer. None of the photo management tools seems to have any kind of support for this: Lightroom, Aperture, Bridge, etc. are all designed for single-computer usage. Bridge has some allowance for using Adobe’s versioning stuff, but it seemed fairly complex to set up, and it wasn’t clear if it could do what I wanted anyway.
I recently became a fan of Dropbox; it’s fantastic for file sharing and syncing amongst a bunch of computers and phones. (If you decide to try it out, use this referral link so that I get some extra space!) I started wondering if I could use Dropbox to sync the Lightroom catalog. This post is a set of instructions on how to do just that, and also on how to set up your photos so that you can easily migrate them between computers, or keep a backup on one or more hard drives.
Moving your Lightroom catalog to Dropbox
This one’s pretty simple. First, install the latest version of Dropbox 0.8 from their beta forums. 0.8 has one feature that makes this much easier and more useful — you can tell it to ignore a set of folders when syncing. This will become important in a second.
After you have Dropbox installed, simply move your main Lightroom catalog file (or files; these instructions work equally well for multiple catalogs) somewhere inside your Dropbox folder. I have a Lightroom folder inside Dropbox that holds all my catalogs.
Open the catalog you just moved; Lightroom should start up, and all your data will be there. However, if you go back to your Dropbox folder, you’ll see that Lightroom created another folder to store previews alongside your catalog. For example, if your catalog is called “Lightroom 3 Catalog”, you should see “Lightroom 3 Catalog Previews.lrdata”. This is the folder that we don’t want sync’d — it can get quite large, and there’s no reason to synchronize it. Open Dropbox preferences (on Windows, this will be from the Dropbox icon in your system tray; I’m not sure how to do it on Mac), select Advanced, and click on Selective Sync. Navigate to where your catalog is, and uncheck the Preview folder.
And that’s it. Your other computers should have gotten a copy of your catalog, and you can open them there and work from the same set of data. However, note that only one computer can have the catalog open at a time. Lightroom creates a lock file which gets sync’d, but you can run into problems if you modify the catalog from two different computers while they’re both disconnected. If you get into this state, you’ll have to do some merging of data just as if you had two independent catalogs. Just get into the habit of closing Lightroom when you’re done with it on any computer and you’ll be fine. (Or, if you’re disconnected for a while with a laptop, only use Lightroom there and connect it to the network before opening up the catalog on any other computers.)
However, this doesn’t get your photos to those other computers. Ideally, you want Lightroom to not care what computer it’s on.
Making your photos available to multiple computers
Note: this section is written with Windows users in mind. If you’re a Mac user, you can do the same thing via symlinks, but I don’t have detailed instructions here.
You can do this by making sure that the photos are always at the same path, no matter what computer you’re on — for example, “C:\Photos”. That becomes somewhat inflexible, though. For example, you may want to keep all your photos on a large external hard drive, but only keep a local cache of your most recent work to save disk space.
Enter the subst Windows utility. subst lets you assign a drive letter to any location on any drive, and modify the location it points to at will. For example, typing “subst p: d:\storage\photos” in the command prompt will make the P: drive show the contents of D:\Storage\Photos. Not a fan of the command prompt? You may want to use Visual Subst, which provides a nice interface to the same functionality.
With this, you can tell Lightroom that all your photos are under P:\. Create a P: drive that points to where you have your photos stored. Then, open Lightroom, and under Folders on the left side, right click on each folder and select “Update Folder Location”. Navigate to the same folder under P:\, and select OK.
Now, copy all your photo folders to a portable drive. Take that drive to another computer, plug it in, use subst to set up your P:\ drive, and open Lightroom — it should see your photos, oblivious to the fact that it’s on a different computer and getting the photos from a portable drive!
You can change the P:\ mapping at will. For example, I keep my photos sorted by date, with the top level folder being the year. My laptop only has the contents of the 2010 folder. Any earlier ones are backed up on the drive (and in other places). In Lightroom, these other folders show up grayed out with a little ? next to them. If I need to do something with these, I plug in the external drive, change where P: points to, restart Lightroom, and I’m back in business.
Actually synchronizing your photo collection
Now that you can import and edit photos using the same catalog anywhere, you need a good solution for actually keeping the photo folder contents up to date, and making sure new changes make it to your various backups.
Unfortunately, I don’t have good instructions for this, especially on Windows. On a Mac, you can use rsync, and there’s likely a nice UI for it somewhere. On Windows, you’d want to use rsync as well, but I’ve yet to find a version of rsync that seems to work well. Unison is also an option, but I’ve had problems with it too. Right now I do a somewhat manual job of keeping these photos up to date. Usually it’s not difficult, because I just add photos (so it involves copying a few new by-date folders over), but it gets complicated if I do any edits in Photoshop.
If I come up with a good solution here, I’ll update this post in the future.
That’s it! Enjoy having your Lightroom catalog available everywhere and not being tied to one computer for editing your photos.