Multiple face detection and recognition in real time

Facial recognition is a problem that has been studied extensively around the world. It spans multiple fields and sciences, especially computer science; other fields with strong interest in the technology include mechatronics, robotics, and criminalistics. The main goal of this demonstration is to show a real-time face detector and recognizer for multiple persons, using Principal Component Analysis (PCA) with eigenfaces, so that it can be applied in many fields.

An example of EigenFaces:

FaceRecPro/Training.png
FaceRecPro/MultiFaceRec.png
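
For readers who want to see how the eigenface approach works in code, the following is a minimal, self-contained Python/NumPy sketch (not the demo's actual implementation): faces are flattened, mean-centered, projected onto the top principal components, and recognized by nearest neighbour in that reduced space. The training images must share the same resolution and be roughly aligned; otherwise the leading components capture pose and lighting variation rather than identity.

    import numpy as np

    def train_eigenfaces(faces, num_components=20):
        """Build an eigenface model from flattened grayscale face images.

        faces: array of shape (n_samples, height * width), one face per row.
        """
        mean_face = faces.mean(axis=0)
        centered = faces - mean_face
        # PCA via SVD: the rows of vt are the principal axes, i.e. the eigenfaces.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        eigenfaces = vt[:num_components]
        # Project each training face into the reduced eigenface space.
        projections = centered @ eigenfaces.T
        return mean_face, eigenfaces, projections

    def recognize(face, mean_face, eigenfaces, projections, labels):
        """Return the label of the nearest training face in eigenface space."""
        weights = (face - mean_face) @ eigenfaces.T
        distances = np.linalg.norm(projections - weights, axis=1)
        return labels[int(np.argmin(distances))]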

Traffic Sign Detection

Traffic sign detection is a crucial component of an autonomous vehicle navigation system. For an automobile to navigate safely in an urban environment, it must be able to understand traffic signs:

  • It should be able to read speed limits, so that it does not receive tickets for speeding and pay a premium on its insurance
  • It should be able to read traffic lights and stop on red
  • It should be able to read stop signs and yield to other vehicles crossing the same intersection.

This demonstration aims to solve a small part of the autonomous vehicle navigation problem: detecting stop signs in images captured by a camera.

Stop Sign Detection
MonoAndroidTrafficSignDetectionResultNexusS.jpg
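
As the screenshot name suggests, the demo itself runs on Android. The sketch below is a hedged OpenCV-Python illustration of one common approach (red-colour segmentation followed by an octagonal-contour check); it is not necessarily the detector the demo uses.

    import cv2

    def find_stop_sign_candidates(bgr_image):
        """Return bounding boxes of red, roughly octagonal regions.

        A simplified colour-plus-shape heuristic, not the demo's exact algorithm.
        """
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so combine two hue ranges.
        lower_red = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
        upper_red = cv2.inRange(hsv, (160, 100, 80), (179, 255, 255))
        mask = cv2.bitwise_or(lower_red, upper_red)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = []
        for contour in contours:
            if cv2.contourArea(contour) < 500:      # skip tiny blobs
                continue
            approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
            if 7 <= len(approx) <= 9:               # roughly octagonal outline
                boxes.append(cv2.boundingRect(contour))
        return boxes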

Speech Recognition & Text to Speech

If you are interested in computer text-to-speech (TTS) and speech recognition (SR), this demonstration is for you: it showcases speech technologies for more than 26 different languages:

TTS_and_SR_screenshot.jpg
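
The demo is built on the platform's own speech APIs; as a rough Python illustration of the same two capabilities, the sketch below uses the pyttsx3 and SpeechRecognition packages (both are assumptions here, not components of the demo).

    import pyttsx3                    # offline text-to-speech
    import speech_recognition as sr   # wrapper around several recognition back ends

    def speak(text):
        """Read a sentence aloud with the system's default voice."""
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

    def listen(language="en-US"):
        """Capture one utterance from the microphone and return the transcript."""
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source)
        # The language code selects which supported language to decode;
        # recognize_google uses a free web API, so a network connection is needed.
        return recognizer.recognize_google(audio, language=language)

    if __name__ == "__main__":
        speak("Say something and I will try to transcribe it.")
        print(listen(language="en-US"))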

Real-Time Tracking Of Human Eyes Using a Camera

Eyes are among the most important features of the human face, so the effective use of eye movements as a communication technique in user-to-computer interfaces can find a place in various application areas.

Eye tracking and the information provided by eye features have the potential to become an interesting way of communicating with a computer in a human-computer interaction (HCI) system. With this motivation, the aim of this project is to design real-time eye-feature tracking software.

The purpose of this demonstration is to implement a real-time eye-feature tracker with the following capabilities (a minimal sketch of the first two follows the list):

  • Real-time face tracking with scale and rotation invariance
  • Tracking the eye areas individually
  • Tracking eye features
  • Eye gaze direction finding
  • Remote control using eye movements
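
A minimal sketch of face tracking and per-eye detection, using OpenCV's bundled Haar cascades (an assumption; the demo's own tracker may use a different method), might look like this:

    import cv2

    # Haar cascades shipped with OpenCV.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    capture = cv2.VideoCapture(0)          # default webcam
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            # Search for eyes only inside the upper half of the face region.
            face_roi = gray[y:y + h // 2, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
                cv2.rectangle(frame, (x + ex, y + ey),
                              (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
        cv2.imshow("Eye tracking sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    capture.release()
    cv2.destroyAllWindows()

Gaze-direction estimation and remote control build on top of these eye regions (for example by locating the pupil within each region) and are not covered by this sketch.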

Natural Language Processing Tools

This application is a collection of natural language processing tools.

Currently, it demonstrates the following NLP tools:

  • a sentence splitter
  • a tokenizer
  • a part-of-speech tagger
  • a chunker (used to “find non-recursive syntactic annotations such as noun phrase chunks”)
  • a parser
  • a name finder
  • a coreference tool
  • an interface to the WordNet lexical database

Parser demo user interface

This application shows the generation of parse trees for English-language sentences and explores some of the other features of natural language processing.
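
As a rough Python analogue of several of these tools, the sketch below uses NLTK for sentence splitting, tokenization, part-of-speech tagging, named-entity chunking and a WordNet lookup; NLTK is a stand-in here, not the toolkit the demo itself wraps.

    import nltk
    from nltk.corpus import wordnet

    # One-time model downloads (resource names can vary slightly between NLTK versions).
    for resource in ("punkt", "averaged_perceptron_tagger",
                     "maxent_ne_chunker", "words", "wordnet"):
        nltk.download(resource, quiet=True)

    text = "The quick brown fox jumps over the lazy dog. John Smith works in New York."

    for sentence in nltk.sent_tokenize(text):      # sentence splitter
        tokens = nltk.word_tokenize(sentence)      # tokenizer
        tagged = nltk.pos_tag(tokens)              # part-of-speech tagger
        tree = nltk.ne_chunk(tagged)               # chunker / name finder over the tagged tokens
        print(tagged)
        print(tree)

    # WordNet lookup, roughly matching the demo's WordNet interface.
    print(wordnet.synsets("fox")[0].definition())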

Detect a written text’s language

Detecting the language of a written text is probably one of the most basic tasks in natural language processing (NLP). For any language-dependent processing of an unknown text, the first thing to determine is which language the text is written in.
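
A minimal illustration of this step, assuming the third-party langdetect package rather than the demo's own detector:

    from langdetect import DetectorFactory, detect, detect_langs

    DetectorFactory.seed = 0   # make results deterministic for short texts

    print(detect("This paragraph is written in English."))       # 'en'
    print(detect("Dieser Absatz ist auf Deutsch geschrieben."))  # 'de'
    # detect_langs also returns a probability for each candidate language.
    print(detect_langs("Ceci est un court texte en français."))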

Automating Semantic Mapping of a Document With Natural Language Processing

Natural Language Processing (NLP) aims to enable computers to derive meaning from human or natural language input. This demonstration extracts entities, keywords, topics, events, themes and concepts. Other than themes and concepts, the results are essentially keywords or phrases. The extracted “strings” often have an associated relevance or strength, count or frequency, and/or sentiment value. We used the features of our NLP Engine to provide filtering capabilities for RSS feeds, enabling the user to create filters based on the extracted strings and additional values.
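
The NLP Engine itself is not shown here; as a hedged stand-in, the sketch below uses spaCy to pull out entities and keyword-like noun phrases, and to filter RSS items on an extracted keyword.

    import spacy

    # Small English model; install it first with: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def extract_strings(text):
        """Return named entities and noun-phrase keywords with simple frequency counts."""
        doc = nlp(text)
        entities = [(ent.text, ent.label_) for ent in doc.ents]
        keywords = {}
        for chunk in doc.noun_chunks:
            key = chunk.text.lower()
            keywords[key] = keywords.get(key, 0) + 1
        return entities, keywords

    def filter_feed_items(items, required_keyword):
        """Keep (title, summary) RSS items whose extracted keywords contain the given term."""
        kept = []
        for title, summary in items:
            _, keywords = extract_strings(f"{title}. {summary}")
            if required_keyword.lower() in keywords:
                kept.append((title, summary))
        return kept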