[IEEE 2011 4th International Conference on Ubi-Media Computing (U-Media) - Sao Paulo, Brazil (2011.07.3-2011.07.4)] 2011 Fourth International Conference on Ubi-Media Computing

Context Aware Middleware for supporting Idea Generation Meetings in Smart Decision Rooms

Carlos Filipe Freitas a,b, António Meireles a, Lino Figueiredo a,b, João Barroso c, Carlos Ramos a,b

a) GECAD – Knowledge Engineering and Decision Support Group at ISEP b) ISEP - Institute of Engineering – Polytechnic of Porto, Porto, Portugal

c) UTAD - Universidade de Trás-os-Montes e Alto Douro, Apartado 1013, Vila Real, Portugal

Abstract—In a globalized world, members of groups may be anywhere, and the need for ubiquitous Idea Generation has emerged. This led to two main needs: the creation of Smart Decision Rooms prepared for this new reality and following the Ambient Intelligence paradigm, and the creation of context aware middleware. This paper describes OLAVAmI, a context aware middleware system that was tested in the LAID environment, a Smart Meeting Room. OLAVAmI provides video production focusing on the speaker, an audio-to-text conversion service, and a multimedia database of meetings produced in an autonomous way. To experiment with OLAVAmI's usage and functionalities, one of the tools present in the LAID test bed was used, and the results are presented in this article.

Keywords— Smart Meeting Rooms; context aware middleware; Ambient Intelligence

I. Introduction

The increasing competitiveness present in the business world has led people and organizations to take decisions in a short period of time, in a formal group setting, in specific spaces (e.g. meeting rooms), and supported by systems that support distributed and asynchronous meetings, naturally allowing a ubiquitous use that can add flexibility to the global organizational environment of today [1]. Such demand for more flexibility and agility is also due to the short time-to-market for new products and services. This demand can be seen in the growth of an agile generation of managers who use new tools like Social Networking and Virtual Environments for competitive business [2].

In parallel, the Ambient Intelligence (AmI) concept has been emerging and maturing, as can be seen in the wide range of applications that have appeared in different areas [3][4] in past years, and also in the proposal of general architectures like [5], which states that a general architecture of environments that follow Ambient Intelligence is composed of an intelligent layer and an operational layer. The operational layer is composed of reliable sensing information and actuation capabilities, such as raw sensors, sensor networks, GPS, robots, communications or databases. The intelligent layer must incorporate AI methods and techniques, and the tasks reserved to this layer are: interpreting the environment's state; representing the information and knowledge associated with the environment; modelling/simulating/representing entities in the environment; learning about the environment and associated aspects; interacting with humans; and, fundamentally, acting on the environment.

Anind Dey defines context as any information that can be used to characterize the situation of an entity [6]. Context Awareness is one of the most desired concepts to include in Ambient Intelligence; the identification of the context is important for deciding how to act in an intelligent way. Context Awareness means that the system is conscious of the current situation it is dealing with.
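Dey's definition can be made concrete with a minimal sketch. The record below (in Python, purely illustrative; the attribute names and the speaking rule are our assumptions, not part of [6]) characterizes an entity's situation through who/where/what attributes, the kind of context a context aware middleware reasons over:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Context:
    """Any information characterizing an entity's situation (after Dey [6])."""
    entity: str      # who or what is being characterized
    location: str    # where the entity is
    activity: str    # what the entity is doing
    timestamp: float = field(default_factory=time.time)

def is_speaking(ctx: Context) -> bool:
    """A context-aware rule: act differently when the entity is the speaker."""
    return ctx.activity == "speaking"

ctx = Context(entity="participant-3", location="LAID room", activity="speaking")
print(is_speaking(ctx))  # → True
```

Acting on such a record, rather than on raw sensor values, is what distinguishes a context aware system from a merely instrumented one.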

In this article we start by presenting Smart Meeting Room projects found in the literature, with a particular focus on the details of the LAID environment. We then present a context aware middleware for audio/video capture and editing. Finally, we present some experiments conducted with an intelligent layer system that supports groups of people in Idea Generation Meetings, making use of the proposed middleware system.

II. Smart Meeting Rooms (SMR)

Intelligent or Smart Meeting Rooms (SMR) are environments that should support efficient and effective interactions among their occupants. The generic SMR goal is normally described as a system that supports multi-person interactions in the environment in real time, but also as a system that is able to remember the past, enabling the review of past events and the reuse of past information in an intuitive, efficient and intelligent way [7]. Besides this classical definition, reference [8] states that an SMR should also support the decision making process, considering the emotional factors of the participants involved as well as the argumentation process. From this last definition we can say that SMR spaces will have to understand what is going on in the meetings and even what the participants are discussing.

In the field we can find interesting projects; reference [9] has a good survey on existing SMR projects. Here we present some of the most important. SMaRT [10] is intended to provide meeting support services that do not require explicit human-computer interaction, enabling the room to react appropriately to users' needs while they maintain the focus on their own goals. It supports human-machine, human-human, and human-computer-human interactions, providing multimodal and fleximodal interfaces for multilingual and multicultural meetings.

Fourth International Conference on Ubi-Media Computing
978-0-7695-4493-9/11 $26.00 © 2011 IEEE
DOI 10.1109/U-MEDIA.2011.24

The literature also refers to the M4 (Multi Modal Meeting Manager) project, a large-scale project funded by the European Union in its 5th Framework Programme [11]. M4's aim is to design a meeting manager that is able to translate the information captured from microphones and cameras into annotated meeting minutes that allow for high-level retrieval questions, as well as summarization and browsing. It is concerned with building a demonstration system to enable structuring, browsing, and querying of an archive of automatically analyzed meetings.

There is also the AMI (Augmented Multi-party Interaction) project [11], concerned with new multimodal technologies to support human interaction in the context of smart meeting rooms and remote meeting assistants. It aims to enhance the value of multimodal meeting recordings and to make human interaction more effective in real time.

The tests performed to evaluate our proposal were carried out in LAID (Laboratory of Ambient Intelligence for Decision Support), an SMR that has been developed in the GECAD research center [12][8]. This smart meeting room is an Intelligent Environment to support decision making meetings [12][13][9] and supports distributed and asynchronous meetings, so participants can take part in the meeting wherever they are. The software included is part of an ambient intelligence environment for decision making where networks of computers, information and services are shared [9]. These software applications also handle emotions and personality. The middleware used is in line with the live participation/support of the meeting, and it is also able to support the review of past meetings. The way this support is given will be explored further ahead with the following hardware: an interactive 61'' plasma screen, an interactive holographic screen, a Mimio® note grabber, six interactive 26'' LCD screens (each one for 1 to 3 persons), 3 cameras, microphones, and activating terminals controlled by a CAN network. With this hardware it is possible to gather all kinds of data produced in the meeting and facilitate their presentation to the participants, reducing the middleware issues to the software solutions that intend to catalog, organize and distribute the meeting information.

At the software level LAID is equipped with a system that supports the decision making process. In particular, this system supports persons in group decision making processes, considering the emotional factors of the participants involved as well as the argumentation process. The modules that compose this SMR are the following: IGTAI, WebMeeting Plus, ABS4GD, WebABS4GD, and the pervasive hardware already referred to.

These tools, mainly IGTAI and WebMeeting Plus, were lacking in the audio and video support they were capable of giving. To fill this gap we developed a pervasive system, described in the current article, named OLAVAmI, which allows the LAID tools [14][9] to work more independently and in a more effective way, introducing the produced multimedia contents and enhancing the users' capabilities with an improved memory of past meetings.

Video content is produced in an autonomous way because the system is able to autonomously perceive who the speaker is and point the best unused camera at him. An automatic audio-to-text conversion feature then allows, on the one hand, the input data to be easily introduced into the tools and, on the other hand, the tools to request from OLAVAmI key moments of meetings in audio and video format. The Ambient Intelligence concept is achieved because the SMR is able to perceive who is talking and to gather multimedia content; on top of that content, more useful information is produced, which is used as input to intelligent systems that are themselves able to reason on such information and provide relevant knowledge to the users.
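The coupling between the audio-to-text service and multimedia retrieval can be illustrated with a short sketch. It assumes (our assumption, not an actual OLAVAmI API) that each transcribed utterance carries a speaker identifier and start/end time stamps, so a tool can map a selected piece of text back to the video interval it should request from the middleware:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Utterance:
    speaker: str
    text: str
    start: float  # seconds from meeting start
    end: float

def video_interval_for(utterances: List[Utterance], phrase: str,
                       padding: float = 5.0) -> Optional[Tuple[float, float]]:
    """Find the utterance containing the selected phrase and return the
    padded [start, end] interval of video to request from the middleware."""
    for u in utterances:
        if phrase in u.text:
            return (max(0.0, u.start - padding), u.end + padding)
    return None

log = [
    Utterance("p1", "we should reduce packaging costs", 62.0, 66.5),
    Utterance("p2", "what about recyclable materials", 67.0, 70.0),
]
print(video_interval_for(log, "recyclable"))  # → (62.0, 75.0)
```

The interval is padded so the retrieved clip includes a few seconds of surrounding discussion, which is what makes a "key moment" reviewable on its own.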

In the following subsections we detail the LAID modules.

A. IGTAI

This is a Multi-Agent system developed in the GECAD research centre, whose main feature is to support groups in the decision making process during the idea generation phase. It handles the information generated by the group in terms of Ideas, Alternatives and Criteria in a tree structure based on brainstorming and mind mapping. Concerning the group support it follows the brainstorming rules; however, the Issue Based Information System paradigm can also be introduced, allowing criticism in an Idea Generation session. In a post-meeting phase this tool allows reviewing the meeting ideas and some statistics about that meeting. It has two different clients, one for desktops and another for cell phones, both built on Java technologies, providing meeting participants with the availability of the group memory wherever they are.

It can be seen as a simple tool usable by people with little experience with informatics systems, offering group knowledge management, ubiquitous access, user adaptiveness and proactiveness, platform independence, and the formulation of a multi-criteria problem at the end of a work session. This enables its users to maximize important process gains such as more information, synergy, learning, stimulation and more objective evaluation, while minimizing process losses such as domination by some members and failure to remember, among others.
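The tree of Ideas, Alternatives and Criteria handled by IGTAI can be sketched as a simple recursive structure. This is an illustrative reconstruction only; the node kinds and method names are our assumptions, not IGTAI's actual classes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str          # "problem", "idea", "alternative" or "criterion"
    label: str
    children: List["Node"] = field(default_factory=list)

    def add(self, kind: str, label: str) -> "Node":
        child = Node(kind, label)
        self.children.append(child)
        return child

    def collect(self, kind: str) -> List[str]:
        """Gather all labels of a given kind anywhere in the subtree."""
        found = [self.label] if self.kind == kind else []
        for c in self.children:
            found.extend(c.collect(kind))
        return found

root = Node("problem", "new product line")
idea = root.add("idea", "eco packaging")
idea.add("alternative", "recycled cardboard")
idea.add("criterion", "cost")
print(root.collect("alternative"))  # → ['recycled cardboard']
```

Collecting the alternative and criterion labels at the end of a session is essentially the formulation of the multi-criteria problem referred to above.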


B. ABS4GD Simulator

There are two different ways to give support to decision makers. The first one is supporting them in a specific decision situation. The second one intends to give them training facilities in order to acquire the competencies and knowledge to be used in a real decision group meeting. From the ArgEmotionAgents project resulted a simulation tool, ABS4GD (Agent Based Simulation for Group Decision), a multi-agent simulator system whose aim is to simulate group decision making processes, considering the emotional and argumentative factors of the participants [8]. For the participants' emotions the revised version of the OCC model is used [15]. The decision simulation process considers emotional aspects and several rounds of possible argumentation between meeting participants.

The simulator is composed of several agents, but the more relevant are the participant agents that simulate the human participants of a meeting. This decision making process is influenced by the emotional state of the agents and by the exchanged arguments.

A database of profiles and history with the group's model is maintained, and this model is built incrementally during the different interactions with the system. It is important to notice that this simulator was not developed in order to substitute a meeting or even to substitute some meeting participants. The simulator is a tool that can be used by one or more participants to simulate possible scenarios, to identify possible trends and to assist these participants (in this way it can be seen as a what-if tool of a decision support system). However, the criteria used by this decision support system are not just rational, since they also consider emotions [12].

C. Ubiquitous Group Decision Support System

The Ubiquitous Group Decision Support System supports persons in group decision making processes considering the personality, emotion and mood factors of the participants involved, as well as the argumentation process [16][8]. Different models are used for the participants' personality, emotions and mood, namely the Five Factor model [17], the OCC model [15] and the Pleasure-Arousal-Dominance model [18], respectively. This system is intended to be used for intelligent decision making, as part of an ambient intelligence environment where networks of computers, information and services are shared. As an example of a potential scenario, consider a distributed meeting involving people in different locations (some in a meeting room, others in their offices, possibly in different countries) with access to different devices (e.g. computers, PDAs, mobile phones, or even embedded systems as part of the meeting room or of their clothes). This meeting is distributed but it is also asynchronous, so participants do not need to be involved at all times. However, when interacting with the system, a meeting participant may wish to receive information as it appears. The system is focused on multicriteria problems, where there are several decision criteria and alternatives that are evaluated against those criteria.

The multi-agent architecture model [13] has two different types of agents: the Facilitator agent and the Participant agent. The Facilitator agent is responsible for the meeting organization (e.g. decision problem and alternatives definition). During the meeting, the Facilitator agent coordinates all the processes and, at the end, reports the results of the meeting to the participants involved. The Participant agent has a very important role in the group decision support system, assisting the participant of the meeting. This agent represents the user in the virtual world and is intended to have the same personality and to make the same decisions as the real participant user would. For the negotiation between the participants towards a consensual solution an argumentation-based approach is used, since agents can justify possible choices and convince other elements of the group about the best or worst alternatives.

III. OLAVAmI – An Operational System for Audio & Video Production and Distribution in Ambient Intelligence

OLAVAmI is a pervasive context middleware tested in the LAID Smart Meeting Room. LAID, whose layout is exposed in Figure 1, has a U-shape table with 6 modules, for 3 persons each.

Figure 1 - Camera angles and LAID layout

OLAVAmI corresponds to an operational layer of an AmI space, and its features include automatic video production focusing on the speaker, an audio-to-text conversion service, and a multimedia database of meetings produced in an autonomous way and focused on the speakers.

The following paragraphs present a description of the OLAVAmI components, which can be seen in Figure 2 in a block diagram.
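The coordination style of this operational layer, a master module routing requests to specialized modules over the CAN network, can be caricatured as a dispatch table. The module and request names below are illustrative assumptions, not the actual protocol:

```python
class Module:
    """A bus module that handles named requests (illustrative stand-in
    for the CAN-connected AVM, VS and TM modules)."""
    def __init__(self, name):
        self.name = name
        self.handlers = {}

    def on(self, request, handler):
        self.handlers[request] = handler

class MasterModule:
    """The MM centralizes state and routes requests to the right module."""
    def __init__(self):
        self.modules = {}

    def register(self, module):
        self.modules[module.name] = module

    def request(self, module_name, request, *args):
        return self.modules[module_name].handlers[request](*args)

mm = MasterModule()
avm = Module("AVM")
avm.on("point_camera", lambda speaker: f"camera aimed at {speaker}")
mm.register(avm)
print(mm.request("AVM", "point_camera", "participant-2"))
# → camera aimed at participant-2
```

Registering a module is all that is needed to extend the system, which mirrors the easy addition of new modules that the CAN network provides.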


Figure 2 - Pervasive operational system overview

The interaction between system and users is performed by a Terminal Module (TM), which communicates with the Master Module (MM) over a CAN network. The MM controls all the system activities in the meeting room through the CAN network, and a USB connection provides a link to a Personal Computer (PC). This allows the system administrator to interact with the system and the meeting participants, and to control the system whether he is in the room or not. In case of autonomous use, without a system administrator, this module controls all the system parts automatically. To enable this feature, the hardware has extra non-volatile memory to support different configurations and to store the system information for later use.

In order to collect the audio signals, unidirectional microphones are used, connected to a microphone preamplifier that is controlled by an Audio-Video Module (AVM) controller. The AVM also has, through a peak detection system on the output signal, an automatic gain control (AGC) to prevent feedback issues, to know which microphone needs to be activated, and to monitor the audio line activity. So the AVM controls all functions of the preamplifier, monitors the audio levels on the audio line and prevents feedback issues. When connected to the video cameras, this module controls the camera position and executes the requests from the MM. Since the overall information is centered in the MM, this module only executes the tasks requested by the MM. In case of no pending tasks, the AVM only monitors the audio line.

The Video Switching Module (VS) is used for video source switching. It is responsible for selecting the respective video source signal and sending it to the PC and, if necessary, to a video wall. To select the respective video source, the MM starts by evaluating the speaker through the respective AVM attached to the audio channel, then points a video camera at the speaker and switches the respective video channel.

Video signals are collected by, at least, three video cameras. These video cameras have internal PAN&TILT, with a PAN angle from -170º to +170º, a 120º TILT angle and an 18 times zoom lens, and can be controlled externally by the AVM module and automatically by the MM.

When the meeting starts, all of the participants can see the meeting information on their TM display. This information can be seen automatically, or requested by meeting participants. The CAN network is responsible for the communications with all parts of the system. One characteristic of this network is the capability of easy addition of new modules.

The video cameras are controlled automatically and their range can be seen in Figure 2. Camera 1 covers the general plans and, at least, two more cover all the participants in the meeting room.

The video and audio signals that arrive at the PC are the result of direct editing, without any post-production. The system is able to recognize the meeting intervenient and point a video camera in his/her direction. While a camera (2 and 3 in Figure 2) is moving to its final position, another camera (1 in Figure 2) supports the output video signal to prevent undesired movements in the final result. The final video signal is recorded and can be sent to a video wall, to the terminal monitors of the participants in the meeting, or through the internet to long-distance participants as in a teleconference.

Apart from autonomous work, the system can be controlled by an operator on the PC, here called the system administrator. The system administrator can be useful for non-standard operations and for monitoring the meeting from outside of the meeting room. Here, the operator only needs to have some multimedia skills. The PC application shows him the system status, and he can adjust or force the permissions of the intervenients depending on the conduct of the meeting; for example, nobody can interrupt the president, or questions can only be made at the end of a presentation.

Figure 3 - OLAVAmI software overview


External agents' interaction is performed in the module that implements the interoperability with the Intelligent Layer. An overview of this module is shown in Figure 3.

On the PC runs a Core Application developed in C#. It allows the operator to interact with the MM, enabling him to control the whole pervasive hardware. This core application also has the responsibility of collecting the audio and video information that comes from the AV board, as well as the available information in the MM, which is transformed into XML format, and of storing them in a Media Database for future retrieval.

The AV board signal is split in order to publish a copy to a multimedia server, e.g. for teleconference use.

On top of the audio-video streaming service another real-time service is placed, responsible for converting the meeting audio to text using the Java Speech API, allowing other applications to get the meeting speech with time stamps.

The intelligent layer can request multimedia packages from the Media Database. This can be performed by accessing a Web Service, which has access to the video and XML files created by the core application. The multimedia requests include small meeting segments or the entire meeting video.

IV. Experiments

As an integration test case for the presented operational system, an adapted version of IGTAI, which represents the intelligent layer system, is presented.

Despite this tool having been designed to be used with the pervasive hardware present in LAID [14], in terms of usability it has some gaps, including the process of information gathering, which must be typed by the meeting participants, and the multimedia support, which is limited to adding content to the nodes of Ideas or Alternatives, limiting a post-meeting review. Thus we intended to give multimedia support in order to provide the users with meeting reviews with audio and video. This last feature should be presented to users in the first instance as meeting summaries, showing only the key moments, or the moments when the users were discussing some particular idea. It is also possible to access the complete review, but only if the user shows the desire to access the whole meeting video.

To allow such features, on one hand, we had to develop other agents for IGTAI. One is responsible for connecting to the operational text module, named textAgent. Another agent is responsible for communicating with the operational web service module (videoAgent). The relationship between the agents' interfaces and the operational services is presented in Figure 4. On the other hand, new graphical components were also added: one to the in-meeting panel and another to the post-meeting panel of IGTAI.

The in-meeting GUI, which can be seen in Figure 4, has one instance of a textAgent and, if it is possible to communicate with the OLAVAmI speech service, the system will present the text of the meeting to the user, allowing the user to select a piece of text to create an idea for the current problem. When such a selection is performed, the videoAgent is notified by the GUI in order to request the piece of video of the last minutes from the operational web service of OLAVAmI, adding it to the new Idea node that was created.

Figure 4 - Interaction between layers

New controls were added to the post-meeting panel allowing users to play the video tracks placed in Idea nodes by the videoAgent and, if the user requires the whole meeting review, the GUI notifies the videoAgent to send a request to the Operational Web Service, even though that review is not attached to the problem database.

Such features allow the system to present meeting reviews focused on the key points of the meeting. Those key points occur when Ideas are added to the meeting problem, so the videoAgent only has to save the minutes before the Idea was selected in the text.

V. Discussion / Conclusion

The information outputted from the proposed middleware, which is handled by the core application, is still below its possibilities. For instance, the use of facial recognition and/or voice recognition would improve the indexing service and consequently the knowledge base, improving the possibility to infer more and different queries and not be limited to meetings/temporal queries. Such knowledge would certainly improve the multimedia


presentation capabilities; however, the currently used temporal dimension is considered very important and allows meeting reviews to be focused on meeting key points. The core application is also limited in the amount of metadata it produces; a good way to fill this gap is to improve the Knowledge Base with inferable metadata and with domain and task ontologies.

The audio-to-text converter can also be improved with metadata, and once again domain and task ontologies are desirable. This would allow inferring non-obvious knowledge, improving visualization and interoperability, and would allow the system to evolve while maintaining backward compatibility.

A first step towards such improvements would be the specification of an ontology covering the meeting task domain, which is ongoing work. Besides that, all the other limitations already mentioned are at the center of our future work.

Concerning the strengths of the system, we can highlight the usage of audio sensing to point the available cameras at the speaker, enabling the production of multimedia meeting data focused on the participants. It also improves meeting reviews and provides more intuitive broadcast sessions that decrease the effort of remote participants to understand who is speaking. With this, we give remote participants more time to center their attention on what is being discussed.

OLAVAmI also greatly increases the usability of the group decision support tools present in LAID, because it is able to provide the input data necessary to these systems, decreasing, once again, the users' effort. With the use of OLAVAmI features, it has been demonstrated that IGTAI usability has increased, because the effort to introduce data was decreased due to the audio-to-text service, and when such a feature is used, key moments of the Idea Generation Meeting are automatically added to the corresponding Ideas/Alternatives nodes. Such a feature improves the review of past meetings because the user is able to watch only the key moments of the meeting or the whole meeting. Finally, distributed meeting participants can also benefit from improved multimedia transmissions that are automatically focused on the speakers.
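The videoAgent behaviour underlying these key moments, saving the minutes immediately before an Idea is created, reduces to clipping an interval from the meeting timeline. A minimal sketch, in which the two-minute window and the function name are our assumptions:

```python
def clip_before(idea_timestamp: float, minutes_before: float = 2.0):
    """Return the (start, end) video interval, in seconds, that the
    videoAgent would request from the operational web service when an
    Idea is created at idea_timestamp (seconds from meeting start)."""
    start = max(0.0, idea_timestamp - minutes_before * 60.0)
    return (start, idea_timestamp)

print(clip_before(300.0))  # → (180.0, 300.0)
print(clip_before(60.0))   # → (0.0, 60.0)  (clamped to meeting start)
```

Because the clip is keyed to the Idea's creation time rather than to any manual marker, the key-moment summary requires no extra effort from the participants.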

Acknowledgments

The authors would like to acknowledge FCT, FEDER, POCTI, POSI, POCI, POSC, and COMPETE for their support to R&D Projects and GECAD.

References

[1] Freitas C., Marreiros G., Ramos C., "IGTAI - An Idea Generation Tool for Ambient Intelligence", in 3rd IET International Conference on Intelligent Environments, pp. 391–397, Ulm, Germany, 2007.

[2] Goldman S.L., Nagel R.N., Davison B.D., and Schmid P.D., "Next Generation Agility: Smart Business and Smart Communities", The Network Experience, 2008, pp. 49-55.

[3] Nakashima H., Aghajan H., and Augusto J.C., eds., Handbook of Ambient Intelligence and Smart Environments, Springer, 2009.

[4] Cook D.J., Augusto J.C., and Jakkula V.R., "Ambient intelligence: Technologies, applications, and opportunities", Pervasive and Mobile Computing, vol. 5, Aug. 2009, pp. 277-298.

[5] Ramos C., Augusto J.C., Shapiro D., "Ambient Intelligence: the next step for AI", IEEE Intelligent Systems magazine, vol. 23, n. 2, pp. 15-18, 2008.

[6] Dey A.K., "Understanding and Using Context", Personal and Ubiquitous Computing, 5(1): 4–7, 2001.

[7] Mikic I., Huang K., Trivedi M., "Activity monitoring and summarization for an intelligent meeting room", in Workshop on Human Motion, 2000, pp. 107–112.

[8] Marreiros G., Santos R., Ramos C., Neves J., "Context-Aware Emotion-Based Model for Group Decision Making", IEEE Intelligent Systems magazine, vol. 25, n. 2, pp. 31-39, 2010.

[9] Ramos C., Marreiros G., Santos R., Freitas C.F., "Smart Offices and Intelligent Decision Rooms", in Handbook of Ambient Intelligence and Smart Environments (AISE), H. Nakashima, J. Augusto, H. Aghajan (eds.), Springer, 2009.

[10] Waibel A., Schultz T., Bett M., Denecke M., Malkin R., Rogina I., Stiefelhagen R., Yang J., "SMaRT: the Smart Meeting Room Task at ISL", in IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003, pp. 281-286.

[11] Nijholt A., "Meetings, gatherings, and events in smart environments", in Proceedings of the 2004 ACM SIGGRAPH International Conference on Virtual Reality Continuum and Its Applications in Industry, 2004, pp. 229-232.

[12] Marreiros G., Santos R., Freitas C., Ramos C., Neves J., Bulas-Cruz J., "LAID - a smart decision room with ambient intelligence for group decision making and argumentation support considering emotional aspects", International Journal of Smart Home, 2(2): 77–94, 2008.

[13] Marreiros G., Santos R., Ramos C., Neves J., Novais P., Machado J., and Bulas-Cruz J., "Ambient intelligence in emotion based ubiquitous decision making", in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2007) - 2nd Workshop on Artificial Intelligence Techniques for Ambient Intelligence (AITAmI'07), pp. 86–91, January 2007.

[14] Freitas C.F., Marreiros G., Ramos C., Santos R., "Hardware and Software in Smart Decision Rooms", in EPIA 2007 - Portuguese Conference on Artificial Intelligence, Workshop on Ambient Intelligence Technologies and Applications (AmITA), Guimaraes, Portugal, 2007, pp. 355–366.

[15] Ortony A., "On making believable emotional agents believable", in Emotions in Humans and Artifacts, MIT Press, 2003.

[16] Santos R., Marreiros G., Ramos C., Neves J., and Bulas-Cruz J., "Multi-agent approach for ubiquitous group decision support involving emotions", in Ubiquitous Intelligence and Computing, volume 4159 of Lecture Notes in Computer Science, pp. 1174–1185, Springer Berlin / Heidelberg, 2006.

[17] Costa P.T. and McCrae R.R., "Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) professional manual", Psychological Assessment Resources, 1992.

[18] Mehrabian A., "Analysis of the big-five personality factors in terms of the PAD temperament model", Australian Journal of Psychology, 48(2), 86–92, 1996.
