Inspiration

At the company holiday/winter lunch, there was a glass container filled with Skittles. I sat with some people from my division, and they mentioned how past interns would measure the jar and work out how many Skittles it held using packing fractions (I just guessed). Not wanting to be outdone by past interns, I told them that next year I would have built an app that calculates how many Skittles there are from just an input picture. HackUMass seemed like the perfect opportunity to make good on that promise.

How it works

Given a picture taken from your phone, this program estimates the number of units (in our case, Skittles) in a volume, so any raffle with a "guess the contents" challenge can be answered with an estimate within a reasonable tolerance. Using OpenCV, we've created two separate methods to estimate the volume. The first, which we call "Poor Man's Computer Vision," bitmasks color boundaries on the container with the aid of user-preprocessed images and computes a Riemann sum of vertical slices with respect to x. The second leans more heavily on OpenCV's built-in computer vision capabilities: it blurs the image with a Gaussian kernel and uses Canny edge detection to outline the filled volume. The scale (pixels/cm) is calculated by running a Hough circle transform on the most clearly circular Skittle in the glass and taking that circle's radius. The volume is then computed the same way as in the first method.
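For the curious, here is a minimal sketch of the second (Canny + Hough) pipeline, assuming an OpenCV 4.x install, a roughly axially symmetric jar photographed side-on, and placeholder constants (SKITTLE_RADIUS_CM, SKITTLE_VOLUME_CM3, PACKING_FRACTION) that are illustrative rather than values from our project. The estimate_count function name is hypothetical, and the disc-style Riemann sum over pixel rows is one plausible reading of the slicing step described above, not necessarily the exact math we used.

```python
import cv2
import numpy as np

# Illustrative constants, not measured values from the project.
SKITTLE_RADIUS_CM = 0.7       # assumed radius of one Skittle seen face-on
SKITTLE_VOLUME_CM3 = 1.1      # assumed volume of one Skittle
PACKING_FRACTION = 0.66       # assumed packing fraction for oblate candies

def estimate_count(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(cv2.GaussianBlur(img, (5, 5), 0), cv2.COLOR_BGR2GRAY)

    # Scale: Hough circle transform on the most clearly circular Skittle.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    if circles is None:
        raise RuntimeError("no candy-sized circle found; cannot set the scale")
    px_per_cm = circles[0, 0, 2] / SKITTLE_RADIUS_CM

    # Outline the filled region: Canny edges -> largest contour -> filled mask.
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [max(contours, key=cv2.contourArea)], -1, 255, -1)

    # Riemann sum: treat each pixel row as a thin disc of revolution.
    volume_cm3 = 0.0
    slice_height_cm = 1.0 / px_per_cm
    for row in mask:
        width_px = np.count_nonzero(row)
        if width_px:
            radius_cm = (width_px / px_per_cm) / 2.0
            volume_cm3 += np.pi * radius_cm ** 2 * slice_height_cm

    # Convert filled volume to a candy count via the packing fraction.
    return int(volume_cm3 * PACKING_FRACTION / SKITTLE_VOLUME_CM3)
```

Dividing the estimated interior volume by a single Skittle's volume times the packing fraction is what turns cubic centimetres into a candy count.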

Challenges I ran into

- BGR... because OpenCV is too good for RGB
- Color blending issues
- Inexperience (1 freshman, 1 neuroscience major with no programming skills)
- No OpenCV experience
- Picture resolution
- OpenCV is almost impossible to configure for resizable windows; 4K resolution isn't enough to show these images
- 1 Mac, 2 Linux, and 1 PC consistently caused issues (dependencies and incompatibilities)
- Floobits "version control" (we should have used git; so many overwriting issues). Side note: Floobits isn't version control, it's "everyone-can-overwrite-an-hour-of-someone-else's-work" control

Accomplishments that I'm proud of

Coming into this event, I was the only programmer on my team of four with any Python experience. The fact that my team picked the language up so quickly is certainly an accomplishment. None of us had any experience with computer vision coming into this challenge either, and we turned it into the core of our project. Learning how to use computer vision to process the images so accurately is something I would never have thought we could pull off.

What I learned

- Python and NumPy
- OpenCV
- Patience under pressure (mostly OpenCV's fault)
- WordPress
- Building dependencies from source (OpenCV; there's an apt-get for that, and somehow only the Windows guy knew it)
- All about packing fractions (see the back-of-envelope estimate after this list)
- IBM Bluemix for our WordPress site and text-to-speech
- MS Azure virtual machines
- The power of Python
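The packing-fraction idea reduces to a one-line estimate: multiply the container's interior volume by the packing fraction, then divide by the volume of a single candy. The numbers below are illustrative assumptions (a ~2 L jar, ~1.1 cm³ per Skittle, ~0.66 packing for oblate candies), not measurements from our project.

```python
# Back-of-envelope Skittle count (all numbers are illustrative assumptions).
jar_volume_cm3 = 2000.0      # assumed ~2 L glass container
skittle_volume_cm3 = 1.1     # assumed volume of one Skittle
packing_fraction = 0.66      # assumed packing fraction for oblate candies

count = jar_volume_cm3 * packing_fraction / skittle_volume_cm3
print(round(count))          # ~1200 Skittles
```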

What's next for Project Riemann: Calculate the Rainbow

- Perfect our computer vision algorithm and the science of packing fractions
- Build a database of known object sizes and packing fractions
- Expand our "Ruler-In-Picture" measuring technology
- Enable computation to be done on a server, with the result emailed back for large photos
- Use Kivy to create a cross-platform application that will run on Mac, Windows, Linux, Android, iOS, etc.
- Send it to market

Built With

Python, NumPy, OpenCV, WordPress, IBM Bluemix, Microsoft Azure
