
Nearly nine million people die every year due to inadequate healthcare in developing countries.

Half the world’s population lacks access to basic health services. How can we build global-health solutions that are actually effective? How can we work more efficiently, at lower cost, to reach more people?

We identified five key areas in which an effective solution must excel:

  1. Performance
  2. Cost
  3. Accessibility
  4. Speed
  5. Generality

With our project, we built a solution that delivers on all five. Scepter is a pre-screening tool that enables rural healthcare facilities to manage large numbers of patients effectively and, ultimately, to help more people and save more lives. Scepter uses neural networks to pre-screen for seven disease categories; all you need is a smartphone and a magnifying glass.

Scepter examines the human retina - a tissue with exceptional diagnostic value. This makes for a general solution: one test can reveal multiple diseases. To remove the need for human expertise during this preliminary pre-screening phase, the app instead uses a convolutional neural network to detect signs of disease in retinal images.

These diseases are:

  • Diabetes
  • Hypertension
  • Glaucoma
  • Cataracts
  • Age-related Macular Degeneration
  • Pathological Myopia
  • Other Abnormalities

The instruments normally required for retinal imaging are clunky machines that can cost tens of thousands of dollars. Instead, we use a smartphone-and-lens approach, which requires only that the pupil be dilated first (typically with pupil-dilating eye drops). By placing a 20 D or 28 D lens a short distance from the eye and holding a smartphone camera (with flash on) roughly 30 cm behind the lens, we can capture a retinal image.
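The geometry above follows from basic lens optics. As a rough illustration (the ~60 D total eye power and the magnification rule of thumb for indirect ophthalmoscopy are textbook assumptions, not measurements from our setup), a short calculation shows why 20 D and 28 D lenses are convenient:

```python
# Rough optics of the smartphone-lens capture setup.
# Assumptions (not from our write-up): the eye's total refractive power
# is ~60 D, and lateral magnification of the aerial retinal image in
# indirect ophthalmoscopy is approximately P_eye / P_lens.

EYE_POWER_D = 60.0  # approximate total refractive power of the human eye

def focal_length_cm(power_diopters: float) -> float:
    """Focal length in cm for a lens of the given power (1 D = 1 m^-1)."""
    return 100.0 / power_diopters

def approx_magnification(lens_power_d: float,
                         eye_power_d: float = EYE_POWER_D) -> float:
    """Approximate lateral magnification of the aerial retinal image."""
    return eye_power_d / lens_power_d

for power in (20.0, 28.0):
    print(f"{power:.0f} D lens: f ~ {focal_length_cm(power):.1f} cm, "
          f"magnification ~ {approx_magnification(power):.2f}x")
```

A 20 D lens thus sits about 5 cm from the eye and yields roughly 3x magnification; the 28 D lens trades some magnification for a wider field of view.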

We purchased a lens combination with the required power for under 10 dollars. This approach naturally yields lower image quality and optical disturbances - something we handle by simulating those same disturbances in our training data. Using a smartphone also gives us, for free, a telecommunications device that can perform on-device computation and communicate with a server.
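To make the simulation idea concrete, here is a minimal sketch of two such disturbances, assuming images are NumPy arrays of shape (H, W, 3) with values in [0, 255]; the specific functions and parameters are illustrative, not our production scripts:

```python
# Illustrative disturbance simulation for clean retinal photos:
# a soft flash glare and an uneven-illumination shadow.
import numpy as np

def add_glare(image: np.ndarray, center: tuple[int, int],
              radius: float, strength: float = 180.0) -> np.ndarray:
    """Overlay a soft circular glare, like a smartphone LED reflection."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    mask = np.clip(1.0 - dist / radius, 0.0, 1.0)  # 1 at center, 0 at edge
    out = image.astype(np.float64) + strength * mask[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)

def random_shadow(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Darken one side of the frame to mimic uneven hand-held lighting."""
    h, w = image.shape[:2]
    ramp = np.linspace(1.0, rng.uniform(0.4, 0.8), w)  # bright -> dark
    out = image.astype(np.float64) * ramp[None, :, None]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying a random selection of such transforms to each clean training image yields examples that look much closer to what the smartphone rig actually produces.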

Our Implementation

We took advantage of modern web technology to create an application that is cheap yet scalable. The mobile app is written in Swift and runs on iOS devices; we also built a Progressive Web Application for desktop and non-Apple devices. The interface is backed by an API written in Python with the Flask framework and a PostgreSQL database. This cloud-first architecture makes the application accessible from anywhere, and its simplicity gives operators a shallow learning curve.
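The shape of that API can be sketched as a single Flask endpoint that accepts a pair of eye images and returns per-category scores. The route name, field names, and placeholder model call below are illustrative assumptions, not the production code:

```python
# Minimal sketch of a Flask screening endpoint (illustrative, not the
# production API): POST two images, get back one score per category.
from flask import Flask, request, jsonify

app = Flask(__name__)

DISEASE_LABELS = [
    "diabetes", "hypertension", "glaucoma", "cataracts",
    "amd", "pathological_myopia", "other",
]

def run_model(left_image_bytes: bytes, right_image_bytes: bytes) -> list:
    """Placeholder for the two-branch CNN; returns one score per category."""
    return [0.0] * len(DISEASE_LABELS)

@app.route("/screen", methods=["POST"])
def screen():
    left = request.files.get("left_eye")
    right = request.files.get("right_eye")
    if left is None or right is None:
        return jsonify(error="both left_eye and right_eye are required"), 400
    scores = run_model(left.read(), right.read())
    return jsonify(dict(zip(DISEASE_LABELS, scores)))

if __name__ == "__main__":
    app.run()
```

Keeping the heavy model behind a single stateless endpoint like this is what lets the same backend serve both the Swift app and the Progressive Web Application.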

The neural network was trained on the ODIR-5K retina dataset, collected from several hospitals. We wrote scripts that intentionally add glares, lens scratches, varying orientations and resolutions, blurs, and shadows, producing a dataset more representative of smartphone-based imaging. On this augmented dataset we trained a two-branch convolutional neural network that takes images of both retinas and produces a diagnosis. After fine-tuning on a held-out validation set, we evaluated on a held-out test set and achieved (weighted) accuracies of XX%.
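The two-branch idea can be sketched as follows. This is a deliberately tiny PyTorch stand-in (our actual architecture, layer sizes, and training setup differ): each branch embeds one eye's image, the embeddings are concatenated, and a final layer maps to the seven disease categories.

```python
# Illustrative two-branch CNN: one feature extractor per eye,
# concatenated embeddings, one logit per disease category.
import torch
import torch.nn as nn

NUM_CLASSES = 7  # the seven disease categories

class EyeBranch(nn.Module):
    def __init__(self, out_features: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1)
        )
        self.fc = nn.Linear(32, out_features)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class TwoBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.left = EyeBranch()
        self.right = EyeBranch()
        self.head = nn.Linear(64, NUM_CLASSES)

    def forward(self, left_img, right_img):
        z = torch.cat([self.left(left_img), self.right(right_img)], dim=1)
        return self.head(z)  # logits; apply sigmoid for multi-label probs
```

Treating the output as independent per-category logits (multi-label, with a sigmoid) fits this problem, since one patient can present several conditions at once.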