
Prototype for Baby Action Detection and Classification


An attempt to harness the power of deep learning to detect, in real time, which class of activity an infant or toddler is performing. This proof of concept can later be published as an end-to-end, cloud-deployable project.

The model does not restrict predictions to babies; it responds to any entity that appears in a human posture. For now, this limitation has to be handled at the project level.

Special thanks to the nicknochnack/ActionDetectionforSignLanguage repository for putting up such helpful content. Without it, this project might never have existed.

Data collection

Data was collected from YouTube video clips. Human pose keypoints were extracted with the help of MediaPipe.
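As a rough illustration of this step, the sketch below runs MediaPipe Pose over a video clip and stacks the per-frame landmarks into a NumPy array. This is a minimal sketch; the function name and the flattened 33-landmark layout are assumptions for illustration, and the project's own extraction code may differ.

import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def extract_keypoints_from_video(path):
    """Run MediaPipe Pose on each frame and return a (num_frames, 33 * 4) array."""
    keypoints = []
    cap = cv2.VideoCapture(path)
    with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads frames as BGR
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # 33 pose landmarks, each with x, y, z and a visibility score
                frame_kp = np.array(
                    [[lm.x, lm.y, lm.z, lm.visibility]
                     for lm in results.pose_landmarks.landmark]
                ).flatten()
            else:
                frame_kp = np.zeros(33 * 4)  # no person detected in this frame
            keypoints.append(frame_kp)
    cap.release()
    return np.array(keypoints)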

Classes trained

  1. Baby Walking
  2. Baby Still (no movement; can be considered sleeping)
  3. Baby Crawling

Create a new environment and use the command below to install all required packages:

pip install -r requirements.txt

  1. Rename your baby video to input.mp4 and place it inside the /raw directory.
  2. Open a command prompt and navigate to the project directory.
  3. Run the prediction script (a rough sketch of what it does follows below):

python prediction.py
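For context, prediction.py broadly follows the pattern sketched below: extract pose keypoints from raw/input.mp4, slide a fixed-length window over the keypoint sequence, and classify each window with the trained model. This is a minimal sketch, not the repository's exact code; the model file name action_model.h5, the 30-frame window length, and the label order are assumptions made for illustration.

import numpy as np
from tensorflow.keras.models import load_model

CLASSES = ["Baby Walking", "Baby Still", "Baby Crawling"]
SEQUENCE_LENGTH = 30                   # assumed number of frames per prediction window
model = load_model("action_model.h5")  # assumed file name of the trained model

# Keypoint extraction as in the sketch under "Data collection"
keypoints = extract_keypoints_from_video("raw/input.mp4")

# Classify each non-overlapping window of keypoint frames
for start in range(0, len(keypoints) - SEQUENCE_LENGTH + 1, SEQUENCE_LENGTH):
    window = keypoints[start:start + SEQUENCE_LENGTH]
    probs = model.predict(np.expand_dims(window, axis=0), verbose=0)[0]
    label = CLASSES[int(np.argmax(probs))]
    print(f"Frames {start}-{start + SEQUENCE_LENGTH - 1}: {label}")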

Full Demo – Drive Link


GitHub

shreyas-jk/Baby-Action-Detection-Safety-System-Prototype: Prototype for Baby Action Detection and Classification


