Speech Emotion Recognition for Affective Human Robot Interaction full report
seminar topics
14-03-2010, 09:08 PM

Speech Emotion Recognition for Affective Human Robot Interaction

We evaluate the performance of a speech emotion recognition method for affective human-robot interaction. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. After applying noise reduction and speech detection, we obtain a feature vector for an utterance from statistics of phonetic and prosodic information. The phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; the prosodic information includes pitch, jitter, and rate of speech. A pattern classifier based on Gaussian-kernel support vector machines then decides the emotion class of the utterance. To simulate a human-robot interaction situation, we record speech commands and dialogs uttered 2 m away from a microphone. Experimental results show that the proposed method achieves a classification accuracy of 58.6%, compared with 60.4% for human listeners, when the reference labels are given by the speakers' intention. With the reference labels given by the listeners' majority decision, the proposed method achieves a classification accuracy of 51.2%.
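The "statistics of phonetic and prosodic information" step can be illustrated with a minimal sketch: frame-level measurements (here only log energy and a pitch track; the paper's full set also covers shimmer, formants, Teager energy, jitter, and rate of speech) are collapsed into one fixed-length utterance-level vector. The function names and the particular statistics chosen (mean, population standard deviation, min, max) are illustrative assumptions, not the paper's exact recipe.

```python
import math
import statistics

def frame_log_energy(frame):
    """Log energy of one speech frame (a list of samples)."""
    e = sum(s * s for s in frame)
    return math.log(e + 1e-10)  # small floor avoids log(0) on silence

def utterance_features(frames, pitch_track):
    """Collapse frame-level features into a fixed-length
    utterance-level vector of simple statistics.

    `pitch_track` holds one pitch value per frame, with 0.0
    marking unvoiced frames (a common convention, assumed here).
    """
    energies = [frame_log_energy(f) for f in frames]
    voiced = [p for p in pitch_track if p > 0]
    feats = []
    for track in (energies, voiced):
        feats += [statistics.mean(track),
                  statistics.pstdev(track),
                  min(track), max(track)]
    return feats
```

Because every utterance maps to the same number of statistics regardless of its duration, the resulting vector can be fed directly to a fixed-input classifier such as an SVM.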

Presented By:
Kwang-Dong Jang and Oh-Wook Kwon
Department of Control and Instrumentation Engineering, Chungbuk National University, Korea
{kdjang,owkwon}@chungbuk.ac.kr

1. Introduction
A human conveys emotion as well as linguistic information via speech signals. The emotion in speech makes verbal communication natural, emphasizes a speaker's intention, and reveals one's psychological state. Recently there have been many research activities on affective human-robot interaction with a humanoid robot, in which emotion is recognized from facial images and speech. In particular, speech emotion recognition requires less hardware and computational complexity than facial emotion recognition. A speech emotion recognizer can be used in an interactive intelligent robot that responds appropriately to a user's command according to the user's emotional state. It can also be embedded in a music player that suggests a music list suited to the user's emotional state.

Emotion can be recognized by using acoustic information and/or linguistic information. Emotion recognition from linguistic information is done by spotting exclamatory words in input utterances and thus cannot be used when there are no exclamatory words. Acoustic information extracted from speech signals is more flexible for emotion recognition than linguistic information, because it does not require a speech recognition system to spot exclamatory words and can be extended to any other language. Among the many features suggested for speech emotion recognition, we select the following acoustic information: pitch, energy, formants, tempo, duration, jitter, shimmer, mel-frequency cepstral coefficients (MFCC), linear predictive coding (LPC) coefficients, and Teager energy. A pattern classifier based on support vector machines (SVM) classifies the emotion by using the feature vector obtained from statistics of the acoustic information. We compare the performance of automatic emotion recognition when the reference labels are given by speakers and by human listeners.
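The Gaussian-kernel SVM classification step can be sketched as follows: each emotion class has a binary SVM, and the class with the highest decision value wins (a one-vs-rest rule, assumed here for illustration; the paper does not specify its multiclass scheme). Each model is given by its support vectors, signed weights (dual coefficients times labels), and a bias; all concrete values below are hypothetical.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - y||^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def svm_score(x, support_vecs, alphas, bias, gamma=0.5):
    """Decision value of one binary SVM: sum_i alpha_i K(sv_i, x) + b."""
    return sum(a * rbf_kernel(sv, x, gamma)
               for a, sv in zip(alphas, support_vecs)) + bias

def classify(x, models, gamma=0.5):
    """One-vs-rest: return the class whose SVM scores highest.
    `models` maps class name -> (support_vecs, alphas, bias)."""
    return max(models, key=lambda c: svm_score(x, *models[c], gamma=gamma))
```

With toy two-class models such as `{"happy": ([[0.0, 0.0]], [1.0], 0.0), "sad": ([[3.0, 3.0]], [1.0], 0.0)}`, a feature vector near `[0, 0]` is labeled happy and one near `[3, 3]` sad; the paper's system applies the same rule over its six emotion classes.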
This paper is organized as follows: Section 2 explains the base features extracted from speech and the pattern classifier. Section 3 describes the experimental results when the reference labels are supplied by human listeners and speakers. Section 4 concludes the paper.

For the full report, please see eurasipProceedings/Ext/SPECOM2006/papers/077.pdf