A Photo is NOT Worth a Thousand Words

Metadata in Photos #

tl;dr: We are just scratching the surface of capturing the metadata surrounding a photo.

Photos have never really stood well on their own.

In order to see the whole picture😝, some context is often necessary.

This is certainly not a new idea. Looking through a shoebox of photos from when my father was growing up, each snapshot has the date it was developed stamped on the back, usually alongside a two-sentence blurb from my grandmother with basic details like who is in the picture and where it was taken.

Today with digital photography, this is easier and more common than ever.

Any digital camera worth its salt will magically embed some contextual data into an unseen portion of the image file. This information usually includes the date/time when the picture was taken (or January 1, 1970 if you never got around to setting the clock on your camera), whether the flash was used, and possibly even the location where the photo was taken (if the camera has GPS, or the photographer uses one of a few special geotagging iPhone apps).
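Reading that hidden data back out is easy enough. Here's a minimal sketch, assuming the third-party exifread package; the file name is a placeholder, and the tag names come straight from the Exif spec.

```python
import exifread  # third-party: pip install exifread

def to_degrees(values, ref):
    """Convert Exif degrees/minutes/seconds rationals to a signed decimal."""
    deg, minutes, seconds = [float(v.num) / float(v.den) for v in values]
    decimal = deg + minutes / 60.0 + seconds / 3600.0
    return -decimal if ref in ("S", "W") else decimal

with open("IMG_0042.jpg", "rb") as f:  # hypothetical file name
    tags = exifread.process_file(f, details=False)

taken_at = tags.get("EXIF DateTimeOriginal")  # e.g. "2013:05:27 14:03:11"
flash = tags.get("EXIF Flash")                # e.g. "Flash did not fire"
if "GPS GPSLatitude" in tags:
    lat = to_degrees(tags["GPS GPSLatitude"].values, str(tags["GPS GPSLatitudeRef"]))
    lon = to_degrees(tags["GPS GPSLongitude"].values, str(tags["GPS GPSLongitudeRef"]))
    print(taken_at, flash, lat, lon)
```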

Once the photo is posted online, we can use Facebook to tag people in the photos. We can add tags, titles, and descriptions.

What other data can we associate with a photo, preferably automatically (because I’m a lazy photographer)?

Day or Night #

If we know what time the photo was taken, and we know where it was taken, it would be quite simple to add a “Day” tag if the photo was taken between sunrise and sunset, or a “Night” tag otherwise.
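Something like this would do it. A rough sketch, assuming the third-party astral library handles the sunrise/sunset math; the coordinates and function name here are mine, not anything standard.

```python
from datetime import datetime, timezone

from astral import LocationInfo  # third-party: pip install astral
from astral.sun import sun

def day_or_night(taken_at: datetime, lat: float, lon: float) -> str:
    """Tag a photo "Day" if its (UTC) timestamp falls between sunrise and sunset."""
    here = LocationInfo(latitude=lat, longitude=lon)
    times = sun(here.observer, date=taken_at.date(), tzinfo=timezone.utc)
    return "Day" if times["sunrise"] <= taken_at <= times["sunset"] else "Night"

# A photo taken at noon local time near Minneapolis:
print(day_or_night(datetime(2013, 6, 1, 17, 0, tzinfo=timezone.utc), 44.98, -93.27))
```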

Holidays / Seasons #

Since we already know the date a photo was taken, why not figure out whether that date corresponds to a holiday (either in the photographer’s home region or the region where the photo was taken) and add that as a tag? It would be nice to easily search online for all photos taken on “New Year’s Eve” or “Memorial Day”.
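The lookup itself is nearly free. A sketch, assuming the third-party holidays package and that we already know which country to check against:

```python
from datetime import date

import holidays  # third-party: pip install holidays

def holiday_tag(taken_on: date, country: str = "US"):
    """Return the name of the holiday on that date in that country, or None."""
    return holidays.country_holidays(country).get(taken_on)

print(holiday_tag(date(2013, 5, 27)))  # "Memorial Day"
print(holiday_tag(date(2013, 7, 2)))   # None -- just an ordinary Tuesday
```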

Weather #

Was it snowing when this picture was taken? Was it the hottest day of the year? Was visibility less than a mile?

I’d love to be able to search for photos of hot days in Minneapolis, or clear sunny photos of the Golden Gate Bridge.

This information could come from one of many different sources.

Weather.gov has a ton of information about weather at airports all over the world, and there are plenty of ways to access this information. Alternatively, the camera could have some basic climate sensors included, or connect to something external like the Node Clima to record this information at precisely the same time and location the photo was taken.
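However the readings arrive (from an airport observation, from Weather.gov, or from a sensor sitting in the hot shoe), turning them into tags is the easy part. A sketch with made-up field names and thresholds:

```python
def weather_tags(temp_f: float, conditions: str, visibility_mi: float) -> list:
    """Turn a weather observation (however it was obtained) into photo tags."""
    tags = []
    if "snow" in conditions.lower():
        tags.append("Snow")
    if temp_f >= 90:
        tags.append("Hot")
    elif temp_f <= 20:
        tags.append("Frigid")
    if visibility_mi < 1:
        tags.append("Low Visibility")
    return tags

# e.g. the observation nearest to the photo's time and place
print(weather_tags(temp_f=12, conditions="Light Snow", visibility_mi=0.5))
# ['Snow', 'Frigid', 'Low Visibility']
```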

Astronomy #

Using an app on my iPhone, I can point my phone at the night sky and it tells me which constellation I’m looking at. Could there be a way to tag a photo of the night sky with the stars and constellations that appear in it?

What if I could search for photos of a full moon over Chicago?
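Spotting constellations in a photo is a hard computer-vision problem, but the full-moon part is just astronomy. A sketch using the third-party PyEphem library; the 98% cutoff is an arbitrary threshold I picked.

```python
from datetime import datetime

import ephem  # third-party: pip install ephem

def moon_tag(taken_at: datetime):
    """Return "Full Moon" when the moon was (nearly) fully illuminated."""
    moon = ephem.Moon(taken_at)
    return "Full Moon" if moon.phase >= 98 else None  # .phase is percent illuminated

print(moon_tag(datetime(2013, 6, 23)))  # the June 2013 "supermoon"
```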

Tripod Info #

What if my tripod had an RFID or NFC identifier tag, so the camera could tell if a tripod was used to take a picture, and record some information about that specific tripod?

This method could also detect whether the camera was inside an underwater housing.

More & Better Location Information #

Is there a way for a camera to tell if it is indoors or outdoors (possibly based on the light source in the photo)? That’d be good information to have.

The camera could make note of which direction it was facing when the photo was taken. Obviously this would require a compass in the camera itself, rather than relying on a phone or something external (my wife says this is not obvious).

Was the picture taken from a moving vehicle? If so, which direction and how fast was the vehicle traveling? This is actually part of the Exif specification already.
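Both the compass heading and the vehicle speed already have standard Exif GPS tags, so reading them back out is trivial. Another exifread sketch (the file name is a placeholder):

```python
import exifread  # third-party: pip install exifread

with open("IMG_0042.jpg", "rb") as f:      # hypothetical file name
    tags = exifread.process_file(f, details=False)

heading = tags.get("GPS GPSImgDirection")  # degrees from north
speed = tags.get("GPS GPSSpeed")           # in the units given by the ref tag
speed_unit = tags.get("GPS GPSSpeedRef")   # "K" = km/h, "M" = mph, "N" = knots
print(heading, speed, speed_unit)
```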

Rather than just the latitude/longitude of a photo, how about looking up what is actually at those coordinates and tagging a photo as having been taken in “San Francisco, CA” or, better yet, “Moscone Center West, San Francisco, CA.”
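Reverse geocoding services already do exactly this lookup. A sketch using the third-party geopy package against OpenStreetMap’s Nominatim service; the coordinates are roughly Moscone West’s, and the user agent string is made up.

```python
from geopy.geocoders import Nominatim  # third-party: pip install geopy

geocoder = Nominatim(user_agent="photo-tagger-sketch")  # hypothetical app name
place = geocoder.reverse((37.7832, -122.4043))
print(place.address)
# something like "Moscone Center West, Howard Street, San Francisco, California, ..."
```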

My camera may not know much more than my GPS coordinates, but Foursquare knows exactly where I was at a given time.

When I take a photo with Instagram, it already lets me attach a Foursquare venue, but it feels like there is more we could do with this. If a photo was taken at “Jimmy John’s,” we also know it was taken at a “Sandwich Shop,” which is a subcategory of “Food.” If these categories could also be added to a photo, I could look for that quirky coffee shop in Seattle I went to last year with the cool sign, even if I didn’t remember its name or location.
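Rolling a venue’s category up to its parents is just a walk up a tree. The hierarchy below is a tiny hand-written example, not Foursquare’s real category data:

```python
# A tiny, hand-written slice of a venue-category hierarchy: child -> parent.
PARENT = {
    "Sandwich Shop": "Food",
    "Coffee Shop": "Food",
    "Aquarium": "Arts & Entertainment",
}

def category_tags(venue_category: str) -> list:
    """Return the venue's own category plus every ancestor category."""
    tags = [venue_category]
    while venue_category in PARENT:
        venue_category = PARENT[venue_category]
        tags.append(venue_category)
    return tags

print(category_tags("Sandwich Shop"))  # ['Sandwich Shop', 'Food']
```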

Was this picture of a shark taken in the ocean or at an aquarium? With location tags, it would be easy to find out.

Denouement #

Much of this information is a bit too heavy for the camera to process on its own, or requires data from online services.

Fortunately, this is 2013 and photos aren’t chained to the camera anymore. We live in an age where “If You Didn’t Blog It, It Didn’t Happen.” The photos that I take are immediately uploaded to Flickr&Facebook&Instagram&Twitter&Etc. It would not be difficult for any of these services to figure out some of this additional metadata.

Or perhaps it would be better processed by a service or application that sits between the camera and the photo sharing website, like my magical Eye-Fi memory card, photo management software like Lightroom or iPhoto or — for those of us who are more technically inclined — a few quick and dirty Python scripts.

A picture is worth a thousand words.

But a picture only tells part of the story; much of the missing context could easily be slurped up from existing sources.
