
Memento Mori

(Warning: graphic toothiness ahead)

I recently had a couple of wisdom teeth removed. Bottoms. One a horizontal impaction, the other a vertical partial eruption.

At the initial consult, my fantastic oral surgeon gave me a panoramic X-ray to see how badly the offending teeth were impinging. It turns out I probably should have had them out a decade ago.

But if I had, I would likely have missed a great opportunity. We now live in an age of digital cone-beam X-rays, cheap DVD burners, and 3D printers. I wondered what it would take to turn that collection of medical data into a physical copy of my own skull.

As it turns out, a couple of days of software wrangling and eleven hours of printing later, I had my answer.

Memento mori. And mini-mori!
The big one is life-sized. The little guy was the first test print.

Painful (data) extraction

If I was going to produce a 3D model suitable for printing, the first step was obvious: I’d need to get the data off of the DVD my dentist gave me and into a print-friendly format.

The scanner stores the data on disc as a “stack” of 0.4mm slices. Imagine taking your head to the supermarket and putting it on a deli slicer. Lop off the top couple of centimeters, slice the rest all the way down to the bottom of your chin, stack it up neatly on some brown paper, and there you have it. Here’s a video fly-through of my head, from top to bottom (generated by RadiAnt viewer).
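
If you want to poke at a stack like that yourself, the slices are just files you can read with ordinary tools. Here’s a minimal sketch in Python using pydicom and NumPy; the folder name is a placeholder, not how my disc was actually laid out.

```python
# Minimal sketch: load an exported DICOM slice stack into a 3D volume.
# The "dicom_export" folder name is a placeholder, not my disc's layout.
from pathlib import Path

import numpy as np
import pydicom

slices = [pydicom.dcmread(p) for p in Path("dicom_export").glob("*.dcm")]

# Sort the slices top-to-bottom by the position stored in each file's header.
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack the 2D pixel arrays into one (depth, height, width) volume and save it.
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)
np.save("volume.npy", volume)
print(volume.shape)
```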

Fortunately my DVD came with a friendly viewing program called i-CAT Vision, supplied by the manufacturer of the cone beam X-ray scanner. It can take that stack of images and turn it into a variety of projections, including an interactive 3D model.

i-CAT Vision in action.

This tool made it very easy to get a sense of the state of my teeth. It can zoom in on any angle and show a 3D model of the scan results. The only trouble was that it offered no obvious way to export that model to any common format.

Fortunately for the medical world (and the rest of us), there’s DICOM. It’s the lingua franca of medical imaging. It’s to medical imaging software what PDF is to e-book readers. There are a bunch of programs out there that can take a stack of image data in DICOM format and reconstruct a model from it. And i-CAT Vision fully supports exporting to DICOM.

Now the bad news: As so often happens with specialized free software, each program is good at one or two things (whatever itch the programmer had to scratch) and terrible at anything else (abysmal user interfaces, abandoned codebases, poor documentation, etc.)

I eventually found InVesalius, a free (as in GPL) DICOM viewer and model creation tool from Brazil. It has a very nice interface for choosing which densities you’re interested in (Bone? Skin? Blood vessels? You pick!) and will export directly to STL (the generally preferred format for conversing with 3D printers). Plus, it was the least crashy of any of the DICOM software I tried.
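
If you’d rather script that thresholding-and-export step than click through it, the same idea can be roughed out with scikit-image and numpy-stl. This is just a sketch building on the volume array from the earlier snippet; the threshold value is a guess you’d have to tune for your own scanner, not a number from my data.

```python
# Rough sketch of the threshold -> surface -> STL step, using the volume
# saved by the previous snippet and assuming 0.4 mm voxels in every axis.
import numpy as np
from skimage import measure
from stl import mesh  # provided by the numpy-stl package

volume = np.load("volume.npy")
BONE_THRESHOLD = 1200  # arbitrary starting point; tune for your own scanner

# Extract a triangle mesh of everything denser than the threshold.
verts, faces, _, _ = measure.marching_cubes(
    volume, level=BONE_THRESHOLD, spacing=(0.4, 0.4, 0.4)
)

# Copy the triangles into numpy-stl's structure and write the STL file.
skull = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    skull.vectors[i] = verts[face]
skull.save("skull.stl")
```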

Clean your teeth

Now that I had a model in a nice STL file, I encountered my second problem. Life (and poor dental hygiene habits) has not been kind to my teeth. I’ve got a ton of fillings, crowns, spikes from root canals, and other foreign metallic strangeness in my mouth. It all shows up as bright white on the scan.

Metal Head

See those dark bands around all of that expensive dental work? I believe that happens because of the limited dynamic range of each slice. If you’ve got a lot of bright spots, you won’t have enough bits to provide detail in the shadows. These shadows and diffraction effects end up creating some really weird artifacts in the 3D model.

No, I’m not really biting a stick. I’m biting radio artifacts!
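
If you’re curious just how lopsided the intensity range gets, a quick histogram of a single slice makes the point. Purely illustrative (matplotlib plus the volume array from earlier), and the slice index is a guess:

```python
# Illustration only: histogram one slice that passes through the dental work
# (the slice index is a guess) to see how far the metal stretches the range.
import matplotlib.pyplot as plt
import numpy as np

volume = np.load("volume.npy")
slice_with_metal = volume[volume.shape[0] // 2]  # pick a middle slice

plt.hist(slice_with_metal.ravel(), bins=200)
plt.xlabel("Voxel intensity")
plt.ylabel("Voxel count")
plt.title("A few very bright metal voxels dwarf everything else")
plt.show()
```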

While it was possible to clean all of that up a slice at a time in InVesalius, that proved to be extremely time consuming. Since I just wanted to make something to spook my friends with (and not necessarily a completely accurate anatomical reference), I turned to a different tool: Autodesk MeshMixer.

Working with MeshMixer is a bit like modeling in clay. You have a few tools that let you push, stretch, smooth, or otherwise mangle your model. The emphasis is on organic manipulations rather than razor sharp precision. In other words, it’s the perfect tool to touch up a human skull.

It’s messy in there. Just like real life.

The inspector tool is a powerful and very fast way to clean up artifacts (like the thousands of disconnected globules hanging around inside my brain, or the dozens of broken meshes that need to be repaired). After a couple of hours of playing with my skull in MeshMixer, I finally had something that looked like it would print well.
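
For what it’s worth, the “delete the floating globules” part of that cleanup can also be done in a few lines of Python with the trimesh library. I did it interactively in MeshMixer, so treat this as a sketch of the idea rather than what I actually ran:

```python
# Sketch: drop small disconnected blobs from the STL and keep only the skull.
# Not what I actually used (MeshMixer did this interactively), just the idea.
import trimesh

skull = trimesh.load_mesh("skull.stl")

# Split into connected components and keep the one with the most faces.
parts = skull.split(only_watertight=False)
largest = max(parts, key=lambda m: len(m.faces))

# Patch small holes so the slicer gets a closed surface, then save.
trimesh.repair.fill_holes(largest)
largest.export("skull_clean.stl")
```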

And since my X-ray imaging data stops at the top of my eye sockets, the skull is open on top; I thought capping it with an organic bowl shape would make for a nice candy dish when printed.

Corporeality

With the model complete, all that was left was to print it. I asked my good friend and 3D printing guru Rich Olson if he’d mind trying to print it out on his Replicator 2. The first couple of test prints looked very promising. After a little more cleanup (adding supports for some overhangs and making the bottom of the model perfectly flat) we decided to try for a full-sized print.

Ten and a half hours later, the results were much better than I’d hoped.

What’s next?

Now that I have my very own mini-me in PLA, I’m thinking of continuing to improve the model. If I remove the jaw, I could print it separately and get a perfect replica of my bottom teeth. If I make the walls just thick enough, I should be able to cast it in aluminum or maybe bronze. Of course, now I want to go get a full MRI of my body (or at least my brain!) and make more models of various organs.

You can download and remix my skull on Thingiverse.


Bullet time lightning

A while back, I took some photos of my spark gap Tesla coil running.

Although I did get some nice shots, I couldn’t help but feel that they don’t quite capture the full experience of a real live lightning machine. While I can’t do much to recreate the visceral smell of ozone and nitrogen compounds formed by the ionizing sparks, or the reverberating whine of the beautiful but deadly spark gap, I did have an idea for bringing another aspect of the lightning show to the interwebs.

3-D lightning! (Be sure to watch it in HD if you have the bandwidth.)

I made a ten-camera array of Canon A470s, and configured them to work as a single 70-megapixel 10-angle camera.


Why that particular camera? Partly because I found someone dumping a bunch of them on eBay for cheap, but also because they run CHDK, the infamous scriptable firmware for Canon cameras. That let me write some code to streamline the process of taking ten photos all at once and then getting them off of the cameras in a reasonable manner. By wiring all of them to the same 10-port USB hub and using CHDK’s syncable USB remote feature, I was able to wire up a single button that makes all of the cameras fire at once. Collect all the photos, find the good ones that are actually in focus, get them aligned, color balanced, and scaled, and away you go. Bullet time lightning.

bullet time lightning
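
The “get them off of the cameras” half was mostly bookkeeping. Here’s the flavor of it as a Python sketch; the mount points, shot counter, and naming scheme are placeholders, not my actual setup:

```python
# Hypothetical sketch of the photo-collection bookkeeping: grab the newest
# JPEG from each camera's card and file it by camera and shot number.
# The /mnt/cam* mount points and naming are assumptions, not my real setup.
import shutil
from pathlib import Path

SHOT = 42  # which firing of the rig this was (made-up counter)

for cam in range(10):
    card = Path(f"/mnt/cam{cam}/DCIM")
    newest = max(card.rglob("*.JPG"), key=lambda p: p.stat().st_mtime)
    dest = Path("shots") / f"shot{SHOT:03d}_cam{cam}.jpg"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(newest, dest)
```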

This was one of the more challenging projects I’ve taken on in a while. I had to build a physical mount to hold all of the cameras, wire them together to a repurposed PC power supply, recompile CHDK to eliminate as many unnecessary camera keystrokes as possible, write some scripts to facilitate taking and retrieving the photos, then shoot the actual photos without accidentally frying the whole rig. And, of course, build and operate the Tesla coil itself, edit together the stills, and compile the whole thing into a possibly entertaining vid.
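
That last “compile the whole thing into a vid” step is the kind of thing ffmpeg is good at. Here’s a sketch of turning a numbered frame sequence into a clip; the file naming and frame rate are assumptions, not what I actually used:

```python
# Sketch: turn a numbered sequence of stills into a video with ffmpeg.
# Assumes frames named frame0001.png, frame0002.png, ... in ./frames/.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-framerate", "15",            # playback speed of the camera sweep
        "-i", "frames/frame%04d.png",  # numbered input frames
        "-c:v", "libx264",             # H.264 output
        "-pix_fmt", "yuv420p",         # keeps picky players happy
        "bullet_time_lightning.mp4",
    ],
    check=True,
)
```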

I want to take a lot more footage with this camera, but I also wanted to release the results as soon as I could. So here you are.

Aside from Tesla shots, what would you shoot if you had a bullet-time style camera?

Enjoy!