
DBSE-monitor

Drowsiness, blind spot, and attention monitor for driving or handling heavy machinery. It also detects objects in the blind spot via computer vision powered by PyTorch on the Jetson Nano, and includes a crash-detection feature.


Introduction:

We will be tackling two problems: drowsiness while performing demanding tasks such as driving or handling heavy machinery, and the blind spot while driving.

The Centers for Disease Control and Prevention (CDC) report that 35% of American drivers sleep less than the recommended minimum of seven hours a day. Sleep deprivation mainly affects attention when performing any task and, in the long term, can affect health permanently.

According to a report by the WHO (World Health Organization) (2), falling asleep while driving is one of the leading causes of traffic accidents: up to 24% of accidents are caused by falling asleep. According to the DMV (Department of Motor Vehicles) (3) and the NHTSA (National Highway Traffic Safety Administration) (4), 20% of accidents are related to drowsiness, putting them on the same level as accidents due to alcohol consumption, sometimes with even worse consequences.

Also, the NHTSA mentions that being angry or in an altered state of mind can lead to more dangerous and aggressive driving (5), endangering the driver's life.

Solution:

We will create a system that can detect a person's drowsiness level, with the aim of notifying the user about his state and whether he is fit to drive.

At the same time, it will monitor the driver's attention and detect whether he is falling asleep while driving. If it detects that he is getting drowsy, a powerful alarm will sound with the objective of waking the driver.

Additionally, it will detect small vehicles and motorcycles in the automobile's blind spots.

In turn, the system will include an accelerometer to call emergency services if the car has an accident, so that the emergency can be attended to quickly.
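As an illustrative sketch of the crash-detection idea, the logic can be as simple as flagging an impact when the total acceleration magnitude spikes. The 4 g threshold and the function names here are assumptions for illustration, not values taken from the repository:

```python
# Sketch: flag a crash when total acceleration exceeds a threshold.
# The threshold (4 g) is an assumed value; tune it for the real sensor.
import math

G = 9.81                  # standard gravity, m/s^2
CRASH_THRESHOLD_G = 4.0   # assumed impact threshold in g

def is_crash(ax, ay, az):
    """Return True if the acceleration magnitude (in m/s^2) suggests a collision."""
    magnitude_g = math.sqrt(ax**2 + ay**2 + az**2) / G
    return magnitude_g >= CRASH_THRESHOLD_G

# Example readings: normal driving vs. a sudden impact spike
print(is_crash(0.5, 0.2, 9.8))     # roughly 1 g -> False
print(is_crash(30.0, 25.0, 10.0))  # roughly 4.1 g -> True
```

In a real deployment, `is_crash` would be fed by the accelerometer's sampling loop, and a positive detection would trigger the emergency call.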

Because an altered psychological state can lead to dangerous driving, the system also looks after the driver's state by analyzing the emotions on his face and playing music intended to generate a positive response.

Current Solutions:

  • Mercedes-Benz Attention Assist uses the car's engine control unit to monitor changes in steering and other driving habits and alerts the driver accordingly.

  • Lexus placed a camera in the dashboard that tracks the driver's face, rather than the vehicle's behavior, and alerts the driver if his or her movements seem to indicate sleep.

  • Volvo's Driver Alert Control is a lane-departure system that monitors and corrects the vehicle's position on the road, then alerts the driver if it detects any drifting between lanes.

  • Saab uses two cameras in the cockpit to monitor the driver's eye movement and alerts the driver with a text message in the dash, followed by a stern audio message if he or she still seems sleepy.

As you can see, these are all premium brands, and there is no plug-and-play system that works for every car. This is our opportunity, as most cars on the road are not in that price range and do not have these systems.

Materials:

Hardware:

Software:

Connection Diagram:

This is the connection diagram of the system:

Laptop Test:

To test the code on a computer, the first step is to install a Python environment manager, such as Anaconda.

https://www.anaconda.com/distribution/

Environment Creation:

Pytorch

First we will create a suitable environment for PyTorch.

conda create --name pytorch

To activate the environment, run the following command:

activate pytorch

The PyTorch page has a small widget that lets you customize the PyTorch installation command according to your operating system and Python environment manager; in my case, the configuration is as follows.

https://pytorch.org/

conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

Dependencies

The other packages we need are the following:

pip install opencv-python matplotlib tqdm python-vlc Pillow

We also attach a requirements.txt file listing all the packages in our environment.

Jupyter Notebook

Inside the Drowsiness, Emotion detection, and YoloV3 folders, you will find a file "Notebook.ipynb" containing the code to run the programs in Jupyter Notebook; in each folder I also attach a file called "notebook.py" with the same code in **.py** format. To install Jupyter Notebook:

conda install -c conda-forge notebook

Command to start Jupyter Notebook:

jupyter notebook

Summary and mini demos:

All the demos below run from a Jupyter notebook and focus on showing the functionality of the AI models; the demo with the hardware is shown at the end of the repository.

Drowsiness Monitor:

The function of this model is to detect when the driver's eyes have been closed for more than 2 seconds or when he is distracted from the road (for example, looking at a cell phone).

Details: https://github.com/altaga/DBSE-monitor/blob/master/Drowsiness

Video: Click on the image.
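As a minimal sketch of the timing logic behind the drowsiness monitor: the repo uses a PyTorch model to classify eye state, but the "closed for more than 2 seconds" rule can be illustrated with a hypothetical per-frame eye-openness score (the threshold values and `DrowsinessTimer` name are assumptions for illustration):

```python
# Sketch of the closed-eye timing rule: fire the alarm only after the eyes
# have stayed closed for more than 2 continuous seconds.
CLOSED_THRESHOLD = 0.2   # assumed openness score below which eyes count as closed
ALARM_AFTER_S = 2.0      # seconds of continuous closure before the alarm fires

class DrowsinessTimer:
    def __init__(self):
        self.closed_since = None  # timestamp when the eyes first closed

    def update(self, openness, now):
        """Feed one frame's eye-openness score; return True when the alarm should fire."""
        if openness < CLOSED_THRESHOLD:
            if self.closed_since is None:
                self.closed_since = now
            return (now - self.closed_since) > ALARM_AFTER_S
        # Eyes open again: reset the timer
        self.closed_since = None
        return False
```

In the real system, `openness` would come from the PyTorch eye-state classifier running on each camera frame, and a `True` result would trigger the alarm sound.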

Driving Monitor:

The function of this model is to detect objects that are less than 3 meters from the car at the blind spot.

Details: https://github.com/altaga/DBSE-monitor/blob/master/YoloV3

Video: Click on the image.
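One common way to turn a YOLO bounding box into a "closer than 3 meters" decision is the pinhole-camera model; this is a sketch of that approach, not necessarily the repo's exact method, and the focal length and object width below are assumed calibration values:

```python
# Sketch: estimate distance to a detected object from its bounding-box width
# using the pinhole-camera model: distance = focal_length * real_width / pixel_width.
FOCAL_LENGTH_PX = 700.0    # assumed camera focal length in pixels (calibration value)
MOTORCYCLE_WIDTH_M = 0.8   # assumed real-world width of a motorcycle in meters

def estimate_distance_m(bbox_width_px, real_width_m=MOTORCYCLE_WIDTH_M):
    """Approximate distance in meters to an object of known real width."""
    return FOCAL_LENGTH_PX * real_width_m / bbox_width_px

def in_blind_spot_danger(bbox_width_px):
    """True when the detected object is estimated to be closer than 3 m."""
    return estimate_distance_m(bbox_width_px) < 3.0

# A wide box (object close to the camera) triggers the warning
print(in_blind_spot_danger(280))  # ~2.0 m -> True
print(in_blind_spot_danger(100))  # ~5.6 m -> False
```

The bounding-box width would come from the YoloV3 detections; calibrating `FOCAL_LENGTH_PX` against a known object at a known distance makes the estimate usable in practice.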

Emotion Monitor:

The function of this model is to detect the driver's emotions at all times and through musical responses (songs) try to correct the driver's mental state, in order to keep him neutral or in a good mood while driving, thus reducing the risk of accidents.

Details: https://github.com/altaga/DBSE-monitor/blob/master/Emotion%20detection

Video: Click on the image.
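As an illustration of the musical-response idea, the core of it is a mapping from the detected emotion to a corrective playlist. The label set and playlist names below are assumptions for illustration; the repo plays the audio itself via python-vlc, which is not shown here:

```python
# Sketch: map a detected emotion label to a playlist meant to nudge the
# driver back toward a neutral or positive state. Labels and playlist
# names are illustrative placeholders.
RESPONSE_PLAYLIST = {
    "angry":   "calm_playlist",
    "fear":    "calm_playlist",
    "sad":     "upbeat_playlist",
    "happy":   None,   # already in a good state: leave the music alone
    "neutral": None,
}

def pick_playlist(emotion):
    """Return the corrective playlist for an emotion label, or None if no change is needed."""
    return RESPONSE_PLAYLIST.get(emotion.lower())

print(pick_playlist("Angry"))  # -> calm_playlist
print(pick_playlist("happy"))  # -> None
```

In the full system, the emotion label would come from the facial-emotion classifier, and the selected playlist would be handed to the python-vlc player.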

Jetson Nano Setup:

The setup process to run everything on the Jetson Nano is in this folder:

https://github.com/altaga/DBSE-monitor/tree/master/Jetson

The Final Product:

Product:

Product installed inside the car:

Notifications:

Epic DEMO:

Video: Click on the image Car

Sorry, GitHub does not allow embedded videos.

Commentary:

I would consider the product finished, as we only need a few additional touches on the industrial-engineering side for it to be a commercial product, and perhaps a bit on the electrical-engineering side to use only the components we need. That said, this works as an upgrade of a project that a couple of friends and I are developing, and it was an ideal springboard for developing the idea much further. It has the potential to become a commercially available option for smart cities, as the transition to autonomous or even smart vehicles will take a while in most cities.

That middle ground between analog, primarily mechanical private transport and a more "smart" vehicle is a huge opportunity, as the transition will take several years and most people cannot afford a new vehicle. Thank you for reading.

References:

Links:

(1) https://medlineplus.gov/healthysleep.html

(2) http://www.euro.who.int/__data/assets/pdf_file/0008/114101/E84683.pdf

(3) https://dmv.ny.gov/press-release/press-release-03-09-2018

(4) https://www.nhtsa.gov/risky-driving/drowsy-driving

(5) https://www.nhtsa.gov/risky-driving/speeding