
WO2017173141A1 - Persistent companion device configuration and deployment platform - Google Patents

Persistent companion device configuration and deployment platform Download PDF

Info

Publication number
WO2017173141A1
WO2017173141A1 · PCT/US2017/025137
Authority
WO
WIPO (PCT)
Prior art keywords
pcd
user
behavior
animation
skill
Prior art date
Application number
PCT/US2017/025137
Other languages
French (fr)
Inventor
Cynthia Breazeal
Avida Michaud
Francois LABERGE
Jonathan Louis ROSS
Carolyn Marothy SAUND
Fardad Faridi
Original Assignee
JIBO, Inc.
Application filed by JIBO, Inc. filed Critical JIBO, Inc.
Priority to CA3019535A priority Critical patent/CA3019535A1/en
Priority to JP2019502527A priority patent/JP2019521449A/en
Priority to KR1020187031496A priority patent/KR102306624B1/en
Publication of WO2017173141A1 publication Critical patent/WO2017173141A1/en


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/30 Creation or generation of source code
    • G06F8/34 Graphical or visual programming
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/0003 Home robots, i.e. small robots for domestic use
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/18 Packaging or power distribution
    • G06F1/181 Enclosures
    • G06F1/182 Enclosures with special features, e.g. for use in industrial environments; grounding or shielding against radio frequency interference [RFI] or electromagnetical interference [EMI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/20 Cooling means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002 Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/20 Software design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/30 Creation or generation of source code
    • G06F8/36 Software reuse
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/1822 Parsing for meaning understanding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225 Feedback of the input speech

Definitions

  • the present application generally relates to a persistent companion device.
  • the present application relates to an apparatus and methods for providing a companion device adapted to reside continually in the environment of a person and to interact with a user of the companion device to provide emotional engagement with the device and/or associated with applications, content, services or longitudinal data collection about the interactions of the user of the companion device with the companion device.
  • the present disclosure relates to methods and systems for providing a companion device adapted to reside continually in the environment of a person and to interact with a user of the companion device to provide emotional engagement with the device and/or associated with applications, content, services or longitudinal data collection about the interactions of the user of the companion device with the companion device.
  • the device may be part of a system that interacts with related hardware, software and other components to provide rich interaction for a wide range of applications as further described herein.
  • a development platform for developing a skill for a persistent companion device (PCD) comprises an asset development library having an application programming interface (API) configured to enable a developer to at least one of find, create, edit and access one or more content assets utilizable for creating a skill that is executable by the PCD, an expression tool suite having one or more APIs via which one or more expressions associated with the skill, as specified by the developer, are received, wherein the skill is executable by the PCD in response to at least one defined input, a behavior editor for specifying one or more behavioral sequences of the PCD for the skill, and a skill deployment facility having an API for deploying the skill to an execution engine for executing the skill.
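For illustration only, the sketch below shows how a skill might be assembled against the components just described (asset library, expression tool suite, behavior editor, deployment facility). Every name in it, including the deploySkill function and the asset paths, is a hypothetical stand-in rather than the actual PCD API.

```javascript
// Hypothetical skill definition; all names and fields are illustrative only.
const skill = {
  name: 'greet-visitor',
  // Content assets that would be found/created via the asset development library API.
  assets: {
    animation: 'assets/wave_hello.anim',
    sound: 'assets/chime.wav'
  },
  // Expressions specified through the expression tool suite.
  expressions: ['happy', 'curious'],
  // Defined input that triggers execution of the skill on the PCD.
  trigger: { type: 'speech', rule: 'greeting' },
  // Behavioral sequence authored in the behavior editor.
  behaviors: ['orient-toward-speaker', 'play-wave-animation', 'say-hello']
};

// Stand-in for the skill deployment facility's API.
function deploySkill(skill, executionEngineUrl) {
  console.log(`Deploying "${skill.name}" to ${executionEngineUrl}`);
  return Promise.resolve({ deployed: true });
}

deploySkill(skill, 'https://example.invalid/pcd/execution-engine')
  .then(result => console.log('Result:', result));
```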
  • a platform for enabling development of a skill using a software development kit comprises a logic level module configured to map received inputs to coded responses and a perceptual level module comprising a vision function module configured to detect one or more vision function events and to inform the logic level module of the one or more detected vision function events, a speech/sound recognizer configured to detect defined sounds and to inform the logic level module of the detected speech/sounds and an expression engine configured to generate one or more animations expressive of defined emotional/persona states and to transmit the one or more animations to the logic level module.
  • Skill development platform methods and systems include a system for developing a skill for a persistent companion device (PCD).
  • the system may include an asset development library that is accessible via an application programming interface (API) executing on a processor, configured to enable a developer to at least one of find, create, edit and access one or more content assets utilizable for creating a skill that is executable by the PCD.
  • the system may also include an animation tool suite executing on the processor and having one or more APIs via which operation of one or more physical elements of the PCD for the skill including at least two of an electronic display, a plurality of movable body segments, a speech output system, and a multi-color light source are specified by the developer, wherein the skill is executable by the PCD in response to at least one input that is defined by the developer.
  • The system may also include a behavior editor executing on the processor for specifying one or more behavioral sequences of the PCD for the skill.
  • the system may include a skill deployment facility executing on the processor and adapted for deploying the skill to an execution engine for executing the skill. In this system, the skill deployment facility may deploy the skill via an API. Additionally, the behavior editor may facilitate operation of a sensory input system and an expressive output system of the PCD.
  • Skill development SDK methods and systems may include a system for enabling development of a persistent companion device (PCD) skill using a software development kit (SDK) that may include a logic level mapping system operating on a processor configured to map received inputs to the PCD to coded responses.
  • the system may also include a PCD behavior tool suite operating on the processor adapted to configure a perceptual engine of the PCD; the tool suite including a vision function system configured via the behavior tool suite to detect one or more vision function events and to inform the logic level mapping system of the one or more detected vision function events, and a speech/sound recognition and understanding system configurable by the behavior tool suite to detect defined sounds and to inform the logic level mapping system of the detected speech/sounds.
  • the system may also include a PCD animation tool suite operating on the processor adapted to configure an expression engine to generate one or more animations expressive of at least one defined state in response to at least one input and to transmit the one or more animations to the logic level mapping system for mapping of the animations to the inputs.
  • the defined state may be at least one of an emotional state, a persona state, a cognitive state, and a state expressing a defined level of energy.
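As a rough illustration of the logic level described above, the sketch below maps simulated perceptual events (a vision detection and a recognized utterance) to coded responses. The event names and the use of Node's EventEmitter as the perceptual source are assumptions made for the example, not the SDK's interface.

```javascript
// Illustrative logic-level mapping of perceptual events to coded responses.
const { EventEmitter } = require('events');

const perception = new EventEmitter(); // stands in for the perceptual level

// Logic level: map received inputs to coded responses.
const responses = {
  'face-detected': () => ({ animation: 'glance-up', state: 'curious' }),
  'speech:greeting': () => ({ speech: 'Hello there!', state: 'happy' })
};

perception.on('event', (name) => {
  const respond = responses[name];
  if (respond) {
    console.log(`input "${name}" ->`, respond());
  }
});

// Simulated detections from the vision system and the speech/sound recognizer.
perception.emit('event', 'face-detected');
perception.emit('event', 'speech:greeting');
```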
  • A PCD software development kit may include user interface methods and systems, including a system for configuring a persistent companion device (PCD) to perform a skill.
  • the system may include a software development kit executing on a networked server.
  • the SDK may include a plurality of animation user interface screens through which a user configures animation associated with the skill, the plurality of user interface screens facilitating specification of the operation of physical elements of the PCD including at least two of an electronic display, a plurality of movable body segments, a speech output system, and a multicolor light source.
  • the SDK may also include a plurality of behavior user interface screens through which the user configures behavior of the PCD for coordinating robot actions and decisions associated with the skill, the plurality of behavior user interface screens facilitating operation of an expressive output system of the PCD in response to a sensory input system of the PCD. Also, a graphical representation of the PCD in at least one of the animation user interface screens and the behavior user interface screens represents the movement of the PCD in response to inputs based on the configuration by the user.
  • the SDK may include a gaze orientation user interface screen through which a user configures the PCD to expressively orient a display screen of the PCD toward a target located in proximity to the PCD as a point in a three-dimensional space relative to the PCD, the PCD responding to the target in at least one of a single-shot mode and a continuous target-tracking mode.
  • Methods and systems may include a system for animating a persistent companion device (PCD) that may include an animation editor executing on a networked server providing access to PCD animation configuration and control functions of the PCD via a software development kit.
  • the system may also include an electronic interface to a PCD, the PCD configured with a plurality of interconnected moveable body segments, motors for rotation thereof, at least one light ring, an electronic display screen, and an audio system.
  • the system may include a PCD animation application programming interface via which the animation editor controls at least a portion of the features of the PCD.
  • the system may include a plurality of animation builders configurable by a user of the animation editor, the animation builders spawning animation instances that indicate active animation sessions.
  • the system may further include a behavior transition system for specifying transition of the PCD from a first animation instance to a second animation instance in response to a signal.
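The builder/instance distinction above can be pictured with a small sketch: a builder holds configuration, each play() spawns an instance representing an active animation session, and a transition stops one instance and starts another on a signal. Class and method names here are illustrative assumptions, not the animation editor's actual API.

```javascript
// Hypothetical builder/instance lifecycle for animations.
class AnimationBuilder {
  constructor(name, options = {}) {
    this.name = name;
    this.options = options; // e.g., speed, layers, DOFs to drive
  }
  play() {
    return new AnimationInstance(this.name, this.options);
  }
}

class AnimationInstance {
  constructor(name, options) {
    this.name = name;
    this.options = options;
    this.active = true;
    console.log(`animation instance started: ${name}`);
  }
  stop() {
    this.active = false;
    console.log(`animation instance stopped: ${this.name}`);
  }
}

// Behavior transition: stop the current instance and start a new one on a signal.
const idle = new AnimationBuilder('idle-breathing').play();
const greet = new AnimationBuilder('wave-hello', { speed: 1.2 });
function onSignal() {
  idle.stop();
  greet.play();
}
onSignal();
```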
  • Methods and systems described herein may include a system for controlling behaviors of a persistent companion device (PCD).
  • the system may include a behavior editor executing on a networked server providing access to PCD behavior configuration and control functions of the PCD via a software development kit.
  • the system may also include a plurality of behavior tree data structures accessible by the behavior editor that facilitate controlling behavior and control flow of autonomous robot operational functions, the operational functions including a plurality of sensor input functions and a plurality of expressive output functions, wherein the plurality of behavior tree data structures organize control of robot operational functions hierarchically, wherein at least one behavior tree data structure is associated with at least one skill performed by the PCD.
  • The system may further include a plurality of behavior nodes of each behavior tree, each of the plurality of behavior nodes associated with one of four behavior states consisting of an invalid state, an in-progress state, a successful state, and a failed state.
  • The system may also include at least one parent behavior node of each behavior tree, the at least one parent node referencing at least one child behavior node and adapted to initiate at least one of sequential child behavior node operation, parallel child behavior node operation, switching among child behavior nodes, and randomly activating a referenced child behavior node.
  • At least a portion of the behavior nodes are each configured with a behavior node decorator that functions to modify a state of its behavior node by performing at least one of preventing a behavior node from starting, forcing an executing behavior node to succeed, forcing an executing behavior node to fail, and re-executing a behavior node.
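A minimal sketch of the behavior-tree model described above follows, using the four states named (invalid, in-progress, succeeded, failed), a sequential parent, and a decorator that forces a failing child to succeed. A real tree would also offer parallel, switch, and random parents; every identifier here is an illustrative assumption.

```javascript
// Toy behavior tree with the four states and a "force success" decorator.
const State = { INVALID: 'invalid', IN_PROGRESS: 'in-progress', SUCCEEDED: 'succeeded', FAILED: 'failed' };

class Behavior {
  constructor(name, tickFn) { this.name = name; this.tickFn = tickFn; this.state = State.INVALID; }
  tick() { this.state = this.tickFn(); return this.state; }
}

class Sequence {
  constructor(children) { this.children = children; this.index = 0; this.state = State.INVALID; }
  tick() {
    while (this.index < this.children.length) {
      const s = this.children[this.index].tick();
      if (s === State.IN_PROGRESS || s === State.FAILED) return (this.state = s);
      this.index += 1; // child succeeded, move to the next one
    }
    return (this.state = State.SUCCEEDED);
  }
}

// Decorator that forces an executing (failing) behavior to succeed.
const succeedOnFail = (behavior) => ({
  tick: () => (behavior.tick() === State.FAILED ? State.SUCCEEDED : behavior.state)
});

const tree = new Sequence([
  new Behavior('look-at-user', () => State.SUCCEEDED),
  succeedOnFail(new Behavior('recognize-face', () => State.FAILED)),
  new Behavior('say-greeting', () => State.SUCCEEDED)
]);
console.log('tree result:', tree.tick());
```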
  • Methods and systems described herein may include a system for recognizing speech with a persistent companion device (PCD).
  • the system may include a PCD speech recognition configuration system that facilitates natural language understanding by a PCD, the system comprising a plurality of user interface screens by which a user operates a speech rule editor executing on a networked computer to configure speech understanding rules comprising at least one of an embedded rule and a custom rule.
  • the system may further include a development kit comprising a library of embedded speech understanding rules accessed by the user via the networked server.
  • the system may include a robot behavior association function of the software development kit by which a user associates speech understanding rules with at least one of a listen-type PCD behavior and a listen success decorator that the user configures to cause the PCD to perform an operation based on a successful result of a condition tested by the listen success decorator.
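As an illustration of pairing a speech understanding rule with a listen-type behavior and a listen-success decorator, consider the sketch below. The rule format and the listen/onListenSuccess helpers are assumptions made for the example rather than the speech rule editor's real syntax.

```javascript
// Hypothetical speech rule plus a listen behavior and a listen-success decorator.
const greetingRule = {
  name: 'greeting',
  // Embedded-style rule: phrases that should resolve to the same intent.
  phrases: ['hello', 'hi there', 'hey pcd'],
  intent: 'GREET'
};

function listen(utterance, rule) {
  const matched = rule.phrases.some(p => utterance.toLowerCase().includes(p));
  return { succeeded: matched, intent: matched ? rule.intent : null };
}

// Listen-success decorator: run an operation only when the tested condition holds.
function onListenSuccess(result, operation) {
  if (result.succeeded) operation(result.intent);
}

const result = listen('Hi there, robot!', greetingRule);
onListenSuccess(result, intent => console.log(`PCD reacts to intent ${intent}`));
```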
  • Methods and systems described herein may include a persistent companion device (PCD) control configuration system.
  • the system may include a PCD animation configuration system that facilitates controlling expressive output of the PCD through playback of scripted animations, responsive operation of the PCD for events detected by event listeners that are configurable by a user, and a plurality of animation layers that facilitate specifying animation commands.
  • the system may further include a PCD behavior configuration system that facilitates controlling mechanical and electronic operation of the PCD.
  • the system may also include a PCD gaze orientation configuration system that facilitates determining directional activity of the gaze of the PCD by specifying a target and a gaze PCD functional mode of at least one of single-shot and target-tracking.
  • controlling mechanical and electronic operation of the PCD through robot behavior comprises controlling transitions between animated behaviors, controlling a plurality of animated behaviors in at least one of parallel control and sequential control, and controlling a plurality of child behaviors based on a behavior tree of parent and child behaviors, wherein a child behavior is activated based on one of a switch condition for selecting among the child behaviors and randomly selecting among the child behaviors.
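The gaze orientation configuration described above can be sketched as follows: a target is a point in 3-D space relative to the PCD, used either once (single-shot) or continuously re-acquired (target-tracking). Field names, units, and the timer-based tracking loop are assumptions for the example only.

```javascript
// Hypothetical gaze orientation configuration and a toy tracking loop.
const gazeConfig = {
  target: { x: 0.8, y: 0.2, z: 1.5 },   // metres, relative to the PCD
  mode: 'target-tracking'               // or 'single-shot'
};

function orientGaze(config, getLatestTarget) {
  if (config.mode === 'single-shot') {
    console.log('orient once toward', config.target);
    return;
  }
  // target-tracking: re-orient whenever the tracked target moves
  let ticks = 0;
  const timer = setInterval(() => {
    console.log('tracking, re-orienting toward', getLatestTarget());
    if (++ticks >= 3) clearInterval(timer); // stop the demo after a few updates
  }, 100);
}

orientGaze(gazeConfig, () => ({ x: 0.8 + Math.random() * 0.1, y: 0.2, z: 1.5 }));
```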
  • FIG. 1 illustrates numerous views of PCD.
  • FIG. 2 illustrates software architecture of the PCD.
  • FIG. 3 illustrates architecture of a psycho-social interaction module (PSIM).
  • FIG. 4 illustrates a task network that shows a simplified version of a greeting interaction by the PCD.
  • FIG. 5 illustrates hardware architecture of the PCD.
  • FIG. 6 illustrates mechanical architecture of the PCD.
  • FIG. 7 illustrates a flowchart for a method to provide a call answering and messaging service.
  • FIG. 8 illustrates a flowchart for a method to relay a story by the PCD.
  • FIG. 9 illustrates a flowchart for a method to indicate and/or influence emotional state of a user by use of the PCD.
  • FIG. 10 illustrates a flowchart for a method to enable story acting or animation feature by the PCD.
  • FIG. 11 illustrates a flowchart for a method to generate and encode back stories.
  • FIG. 12 illustrates a flowchart for a method to access interaction data and use it to address a user's needs.
  • FIG. 13 illustrates a flowchart for a method to adjust behavior of the PCD based on user inputs.
  • FIG. 14 illustrates an example of displaying a recurring, persistent, or semi-persistent visual element.
  • FIG. 15 illustrates an example of displaying a recurring, persistent, or semi-persistent visual element.
  • FIG. 16 illustrates an example of displaying a recurring, persistent, or semi-persistent visual element.
  • FIG. 17 illustrates an exemplary and non-limiting embodiment of a runtime skill for a PCD.
  • FIG. 18 is an illustration of an exemplary and non-limiting embodiment of a flow and various architectural components for a platform enabling development of a skill using the SDK.
  • FIG. 19 is an illustration of an exemplary and non-limiting embodiment of a user interface that may be provided for the creation of assets.
  • FIG. 20 is an illustration of exemplary and non-limiting screen shots of a local perception space (LPS) visualization tool that may allow a developer to see the local perception space of the PCD.
  • FIG. 21 is an illustration of a screenshot of a behavior editor according to an exemplary and non-limiting embodiment.
  • LPS local perception space
  • FIG. 22 is an illustration of a formal way of creating branching logic according to an exemplary and non-limiting embodiment.
  • FIG. 23 is an illustration of an exemplary and non-limiting embodiment whereby select logic may be added as an argument to a behavior.
  • FIG. 24 is an illustration of an exemplary and non-limiting embodiment of a simulation window.
  • FIG. 25 is an illustration of an exemplary and non-limiting embodiment of a social robot animation editor of a social robot expression tool suite.
  • FIG. 26 is an illustration of an exemplary and non-limiting embodiment of a PCD animation movement tool.
  • FIG. 27 depicts a block diagram of an architecture of a social robot-specific software development kit.
  • FIG. 28 depicts a behavior tree snippet in which two behaviors are executed at the same time, then a second behavior, then a third behavior.
  • FIG. 29 depicts a leaf behavior.
  • FIG. 30 depicts a user interface display of sequential and parallel parent behaviors.
  • FIG. 31 depicts a user interface display of a behavior decorator.
  • FIG. 32 depicts a main behavior tree of a skill.
  • FIG. 33 depicts a user interface of a behavior editor for editing a behavior tree leaf.
  • FIG. 34 depicts a decorator configured to change a state of a behavior based on a measurable condition.
  • FIG. 35 depicts a user interface for specifying arguments of a behavior.
  • FIG. 36 depicts an illustration of the lifecycle of builders and instances across configuration, activation, and run/control.
  • FIG. 37 depicts a diagram that provides a map of the robot's individual DOFs, DOF value types, and common DOF groupings.
  • FIG. 38 depicts an animate module following a policy of exclusive DOF ownership by the most recently triggered animate instance.
  • FIG. 39 depicts an alternate embodiment for the embodiment of FIG. 38.
  • FIG. 40 depicts configuring a transaction with an animation.
  • FIG. 41 depicts timing of exemplary core animation events.
  • FIG. 42 depicts a timeline of events produced by two overlapping animation instances.
  • FIG. 43 depicts an example of a look-at orientation configuration interface.
  • FIG. 44 depicts including custom code in a behavior tree node for toggling between two different look-at targets.
  • FIG. 45 depicts a three coordinate system of the social robot referenced by the software development kit.
  • FIG. 46 depicts a user interface of the software development kit for editing animations.
  • FIG. 47 depicts a user interface for configuring a body layer for controlling the segments of the social robot.
  • FIG. 48 depicts a user interface for configuring an eye layer for controlling the representative eye image of the social robot.
  • FIG. 49 depicts a user interface for configuring an eye texture layer for controlling a texture aspect of the representative eye image of the social robot.
  • FIG. 50 depicts a user interface for configuring an eye overlay layer for controlling the representative eye image of the social robot.
  • FIG. 51 depicts a user interface for configuring an eye overlay texture layer for controlling the representative eye image of the social robot.
  • FIG. 52 depicts a user interface for configuring a background layer for controlling the background of a representative eye image of the social robot.
  • FIG. 53 depicts a user interface for configuring an LED disposed around a body segment of the social robot.
  • FIG. 54 depicts a user interface for configuring events.
  • FIG. 55 depicts a user interface for configuring an audio event layer.
  • FIG. 56 depicts a user interface of a speech rules editor.
  • FIG. 57 depicts an alternate speech rules editor user interface screen.
  • FIG. 58 depicts a listen behavior editor user interface screen.
  • FIG. 59 depicts an alternate listen behavior editor user interface.
  • FIG. 60 depicts another alternate listen behavior editor user interface screen.
  • FIG. 61 depicts a MIM configuration user interface.
  • FIG. 62 depicts a MIM Rule editor user interface.
  • FIG. 63 depicts a flow editor of the PCD SDK.

DETAILED DESCRIPTION
  • PCD Persistent Companion Device
  • “PCD” and “social robot” may be used interchangeably except where context indicates otherwise.
  • The PCD provides a persistent, social presence with a distinct persona that is expressive through movement, graphics, sounds, lights, and scent.
  • digital soul refers to a plurality of attributes capable of being stored in a digital format that serve as inputs for determining and executing actions by a PCD.
  • environment refers to the physical environment of a user within a proximity to the user sufficient to allow for observation of the user by the sensors of a PCD.
  • This digital soul operates to engage users in social interaction and rapport-building activities via a social-emotional/interpersonal feel attendant to the PCD's interaction/interface.
  • PCD 100 may perform a wide variety of functions for its user.
  • The PCD may (1) facilitate and support more meaningful, participatory, physically embedded, socially situated interactions between people/users, (2) engage in the performance of utilitarian tasks wherein the PCD acts as an assistant or something that provides a personal service including, but not limited to, providing the user with useful information, assisting in scheduling, reminding, providing particular services such as acting as a photographer, helping the family create/preserve/share the family stories and knowledge (e.g., special recipes), etc., and (3) entertain users (e.g., stories, games, music, and other media or content) and provide company and companionship.
  • various functions of PCD may be accomplished via a plurality of modes of operation including, but not limited to: i. Via a personified interface, optionally expressing a range of different personality traits, including traits that may adapt over time to provide improved companionship. ii. Through an expressive, warm humanized interface that may convey information as well as affect. As described below, such an interface may express emotion, affect and personality through a number of cues including facial expression (either by animation or movement), body movement, graphics, sound, speech, color, light, scent, and the like. iii. Via acquiring contextualized, longitudinal information across multiple sources (sensors, data, information from other devices, the internet, GPS, etc.) to render PCD increasingly tailored, adapted and tuned to its user(s).
  • PCD 100 incorporates a plurality of exemplary input/sensor devices including, for example, capacitive sensors 102.
  • One or more capacitive sensors 102 may operate to sense physical social interaction including, but not limited to, stroking, hugging, touching and the like, as well as potentially serving as a user interface.
  • PCD 100 may further incorporate touch screen 104 as a device configured to receive input from a user as well as to function as a graphic display for the outputting of data by PCD 100 to a user.
  • PCD 100 may further incorporate one or more cameras 106 for receiving input of a visual nature including, but not limited to, still images and video.
  • PCD 100 may further incorporate one or more joysticks 108 to receive input from a user.
  • PCD 100 may further incorporate one or more speakers 110 for emitting or otherwise outputting audio data.
  • PCD 100 may further incorporate one or more microphones 112.
  • the software architecture 200 may be adapted to technologies such as artificial intelligence, machine learning, and associated software and hardware systems that may enable the PCD 100 to come to life as an emotionally resonant persona that may engage people through a robotic embodiment as well as through connected devices across a wide range of applications.
  • the intelligence associated with the PCD 100 may be divided into one or more categories that may encode the human social code into machines.
  • these one or more categories may be a foundation of a PCD's cognitive-emotive architecture.
  • the one or more categories may include, but are not limited to, psycho-social perception, psycho-social learning, psycho-social interaction, psycho-social expression, and the like.
  • the psycho-social perception category of intelligence may include an integrated machine perception of human social cues (e.g., vision, audition, touch) to support natural social interface and far-field interaction of the PCD 100.
  • the psycho-social learning category may include algorithms through which the PCD 100 may learn about people's identity, activity patterns, preferences, and interests through direct interaction and via data analytics from the multi-modal data captured by the PCD 100 and device ecosystem.
  • the PCD may record voice samples of people entering its near or far field communication range and make use of voice identification systems to obtain identity and personal data of the people detected. Further, the PCD may detect the UUID broadcast in the discovery channel of BLE-enabled devices and decode personal data associated with the device user.
  • the PCD may use the obtained identity and personal data to gather additional personal information from social networking sites like Facebook, Twitter, LinkedIn, or similar.
  • the PCD may announce the presence and identity of the people detected in its near or far field communication range along with a display of the constructed personal profile of the people.
  • the psycho-social interaction category may enable the PCD 100 to perform proactive decision making processes so as to support tasks and activities, as well as rapport building skills that build trust and emotional bond with people - all through language and multi-modal behavior.
  • the psycho-social expression category of the intelligence may enable the PCD 100 to orchestrate its multi-modal outputs to "come to life", to enliven content, and to engage people as an emotionally attuned persona through an orchestra of speech, movement, graphics, sounds and lighting.
  • the architecture 200 may include modules corresponding to multi-modal machine perception technologies, speech recognition, expressive speech synthesis, as well as hardware modules that leverage cost effectiveness (i.e., components common to mobile devices). As illustrated in FIG. 1, one or more software subsystems are provided within the PCD 100, and these subsystems will be described in more detail below.
  • the psycho-social perception of the PCD 100 may include an aural perception that may be used to handle voice input, and a visual-spatial perception that may be used to assess the location of, capture the emotion of, recognize the identity and gestures of, and maintain interaction with users.
  • the aural perception of the PCD 100 may be realized using an array of microphones 202, one or more signal processing techniques such as 204 and an automatic speech recognition module 206. Further, the aural perception may be realized by leveraging components and technologies created for the mobile computing ecosystem with unique sensory and processing requirements of an interactive social robot.
  • the PCD 100 may include hardware and software to support multi-modal far-field interaction via speech using the microphone array 202 and noise cancelling technology using the signal processing module 204a, as well as third-party solutions to assist with automatic speech recognition module 206 and auditory scene analysis.
  • the PCD 100 may be configured to adapt to hear and understand what people are saying in a noisy environment. In order to do this, a sound signal may be passed through the signal processing module 204a before it is passed into the automatic speech recognizer (ASR) module 206. The sound signal is processed to isolate speech from static and dynamic background noises, echoes, motors, and even other people talking so as to improve the ASR's success rate.
  • ASR automatic speech recognizer
  • the PCD 100 may be configured to use an array of at least 4 MEMS microphones in a spatial configuration.
  • a sound time-of-arrival based algorithm (referred to herein as a beam-forming algorithm) may be employed to isolate sound in a particular direction.
  • the beam-forming algorithm may isolate sound coming from a particular spatial source.
  • the beam-forming algorithm may be able to provide information about multiple sources of sound by allowing multiple beams simultaneously.
  • a speech-non speech detection algorithm may be able to identify the speech source, and provide spatial localization of the speaker.
  • the beam-forming information may be integrated with the vision and awareness systems of the PCD 100 so as to choose the direction, as well as motor capability to turn and orient.
  • a 3D sensor may be used to detect location of a person's head in 3D space and accordingly, the direction may be communicated to the beam-forming algorithm which may isolate sounds coming from the sensed location before passing that along to the ASR module 206.
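A simplified sketch of that pipeline step: a 3-D head location from the depth sensor is converted to a horizontal bearing, and a sector of roughly +/-30 degrees around that bearing is handed to the beam-forming stage before the audio reaches the ASR module. The coordinate convention and function names are assumptions for illustration.

```javascript
// Convert a sensed 3-D head position into a beam-forming steering sector.
function bearingFromHeadPosition(head) {
  // head = { x, y, z } in metres, robot-centred; x to the right, z straight ahead
  return Math.atan2(head.x, head.z) * 180 / Math.PI;
}

function beamFormingSector(bearingDeg, halfWidthDeg = 30) {
  return { min: bearingDeg - halfWidthDeg, max: bearingDeg + halfWidthDeg };
}

const head = { x: 0.7, y: 0.3, z: 1.6 };
const bearing = bearingFromHeadPosition(head);
console.log('steer beam to', bearing.toFixed(1), 'deg; sector', beamFormingSector(bearing));
```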
  • the PCD 100 may generate sound either by speaking or making noises.
  • the signal processing module 204a may be configured to prevent these sounds from being fed back through the microphone array 202 and into the ASR module 206.
  • signal processing module 204a may employ algorithms that may subtract out the signal being fed to the speaker from the signal being received by the microphone.
  • the PCD 100 may be configured to implement mechanical approaches and signal processing techniques.
  • the PCD 100 may monitor different parts of a motor so as to address the noise generated from these parts of the motor.
  • the PCD 100 may be configured to mount the motor in an elastomeric material, which may absorb high frequencies that may be produced by armature bearings in the form of a whirring sound.
  • the motor may include brushes that may produce a hissing sound, which is only noticeable when the motor is rotating at high speeds. Accordingly, the PCD 100 may exhibit animations and movements at a relatively low speed so as to avoid the hissing sound.
  • the PCD 100 may be configured to implement a lower gear ratio and, further, to reduce the speed of the motor so as to avoid the hissing sound.
  • lower quality PWM drives, like those found in hobbyist servos, may produce a high-pitched whine.
  • the PCD 100 may be configured with good quality PWM drives so as to eliminate this part of the motor noise.
  • gears of the motor may cause a lower pitched grinding sound, which accounts for the majority of the motor noise.
  • the final gear drive may bear the most torque in a drive train, and is thus the source of the most noise.
  • the PCD 100 may be configured to replace the final gear drive with a friction drive so as to minimize this source of noise.
  • the PCD 100 may be configured to employ signal processing techniques so as to reduce noise generated by the motor.
  • a microphone may be placed next to each motor so that its noise signal may be subtracted from the signals in the main microphone array 202.
  • An output of the audio pipeline of the PCD 100 may feed the cleaned-up audio source into the ASR module 206 that may convert speech into text and possibly into alternative competing word hypotheses enriched with meaningful confidence levels, for instance using ASR's n-best output or word-lattices.
  • the textual representation of speech (words) may then be parsed to "understand" the user's intent and user's provided information and eventually transformed into a symbolic representation (semantics).
  • the ASR module 206 may recognize speech from users at a normal volume and at a distance that corresponds to the typical interpersonal communication distance. In an example, the distance may be near to 5-6 feet or greater dependent on a multitude of environmental attributes comprising ambient noise and speech quality.
  • the speech recognition range should cover an area of a typical 12 ft. by 15 ft. room.
  • the signal fed to the ASR module 206 will be the result of the microphone-array beam-forming algorithm and may come from an acoustic angle of about +/- 30 degrees around the speaker.
  • the relatively narrow acoustic angle may allow actively reducing part of the background ambient noise and reverberation, which are the main causes of poor speech recognition accuracy.
  • the PCD 100 may proactively request the speaker to get closer (e.g., if the distance of the speaker is available as determined by the 3D sensor) or to speak louder, or both.
  • the PCD 100 may be configured to employ a real-time embedded ASR solution which may support large vocabulary recognition with grammars and statistical language models (SLMs). Further, the acoustic ASR models may be trained and/or tuned using data from an acoustic rig so as to improve speech recognition rates.
  • SLMs statistical language models
  • the PCD 100 may be configured to include a natural language processing layer that may be sandwiched between the ASR module 206 and an interaction system of the PCD 100.
  • the natural language processing layer may include a natural language understanding (NLU) module that may take the text generated by the ASR and assign meaning to that text.
  • NLU natural language understanding
  • the NLU module may be configured to adapt to formats such as augmented Backus-Naur form (BNF) notation, Java Speech Grammar Format (JSGF), or Speech Recognition Grammar Format (SRGF), which may be supported by the above mentioned embedded speech recognizers.
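For a feel of what such a grammar and its semantic output might look like, the sketch below embeds a tiny JSGF-style rule as a string and uses a toy matcher to turn recognized text into a semantic frame. A real recognizer would consume the grammar directly; the regular-expression matcher here is only a stand-in for illustration.

```javascript
// Illustrative JSGF-style grammar (shown as a string) and a toy NLU step.
const photoGrammar = `
  #JSGF V1.0;
  grammar photo;
  public <takePicture> = (take | snap) a (picture | photo) [of me];
`;

function understand(text) {
  // Stand-in for grammar-driven parsing: map matched text to a semantic frame.
  const m = /\b(take|snap)\s+a\s+(picture|photo)(\s+of\s+me)?/i.exec(text);
  if (!m) return null;
  return { intent: 'TAKE_PICTURE', subject: m[3] ? 'user' : 'unspecified' };
}

console.log(photoGrammar.trim());
console.log(understand("Hey, let's take a picture of me!"));
// -> { intent: 'TAKE_PICTURE', subject: 'user' }
```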
  • the PCD 100 may gradually transform traditional grammars into statistical grammars that may provide higher speech recognition and understanding performance, and allow for automatic data-driven adaptation.
  • the PCD 100 may be configured to design a structured interaction flow (based on the task network representation adopted for the brain of the PCD 100) using multimodal dialog system user interface design principles for each interaction task.
  • the interaction flow may be designed to receive multimodal inputs (e.g. voice and touch) sequentially (e.g. one input at a time) or simultaneously (e.g. inputs may be processed independently in the order they are received) and to generate multimodal outputs (e.g. voice prompts, PCD's movements, display icons and text).
  • when the PCD 100 asks a yes/no question, an eye of the PCD 100 may morph into a question mark shape with yes/no icons that may be selected by one or more touch sensors.
  • the PCD 100 may be adapted to process natural language interactions that express an intent (e.g., "Hey! Let's take a picture!").
  • interactions may be followed in a "directed dialog" manner. For instance, after the intent of taking a picture has been identified, the PCD 100 may ask directed questions, either for confirming what was just heard or asking for additional information (e.g. Do you want me to take a picture of you?).
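The directed-dialog pattern above can be sketched as a small state object: once the picture-taking intent is identified, the PCD asks directed follow-up questions until the missing information is filled and confirmed. Prompts and slot names are assumptions made for the example.

```javascript
// Toy directed dialog after the TAKE_PICTURE intent has been identified.
const dialog = {
  intent: 'TAKE_PICTURE',
  slots: { subject: null, confirmed: false },
  nextPrompt() {
    if (this.slots.subject === null) return 'Do you want me to take a picture of you?';
    if (!this.slots.confirmed) return `Okay, a picture of ${this.slots.subject}. Ready?`;
    return null; // dialog complete, perform the task
  }
};

console.log(dialog.nextPrompt());   // ask who the subject is
dialog.slots.subject = 'you';
console.log(dialog.nextPrompt());   // confirm before acting
dialog.slots.confirmed = true;
console.log(dialog.nextPrompt());   // null -> go ahead and take the picture
```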
  • the PCD 100 may be configured to employ one or more visual-spatial perception sensors such as a RGB camera 212, a depth camera 214 and other sensors so as to receive 2D vision, 3D Vision, or sense motion or color.
  • the PCD 100 may be configured to attain emotion perception of the user in the surrounding environment. For example, the PCD 100 may detect an expressed emotional state of each person.
  • the PCD 100 may include a visual-spatial perception subsystem to keep track of the moment-to-moment physical state of users and the environment. This subsystem may present the current state estimate of users to the other internal software modules as a dynamically updated, shared data structure called the Local Perceptual Space (LPS) 208.
  • LPS Local Perceptual Space
  • the LPS may be built by combining multiple sensory input streams in a single 3D coordinate system centered on a current location of the PCD 100, while sensors may be registered in 3D using kinematic transformations that may account for its movements.
  • the LPS 208 may be designed to maintain multiple 'levels' of information, each progressing to higher levels of detail and requiring additional processing and key sensor inputs.
  • the LPS 208 levels may include:
  • This level may detect persons present in nearby surroundings.
  • the PCD 100 may calculate the number of nearby persons using the sensors.
  • a visual motion cue in the system may be employed to orient the PCD 100.
  • pyroelectric infrared (PIR) sensing and a simple microphone output may be integrated to implement wake up on the microcontroller so that the system can be in a low-power 'sleep' state, but may still respond to someone entering the room. This may be combined with visual motion cues and color segmentation models to detect the presence of people.
  • the detection may be integrated with the LPS 208.
  • the PCD 100 may be configured to locate the person in 3D and accordingly, determine the trajectory of the person using sensors such as vision, depth, motion, sound, color, features & active movement. For example, a combination of visual motion detection and 3D person detection may be used to locate the user (especially their head/face). Further, the LPS 208 may be adapted to include temporal models and other inputs to handle occlusions and more simultaneous people.
  • the system may learn (from moving regions and 3D) a color segmentation model (Naive Bayes) online from images to adaptively separate the users face and hands from the background and combine the results of multiple inputs with the spatial and temporal filtering of the LPS 208 to provide robust person location detection for the system.
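A minimal sketch of an LPS-style shared structure follows: sensor modules write observations into a single robot-centred store, and other modules read a dynamically updated estimate such as the most confident person location. The field names and confidence handling are simplified assumptions, not the actual LPS 208 implementation.

```javascript
// Toy Local Perceptual Space: one shared, dynamically updated 3-D store.
const lps = {
  people: new Map(),   // id -> { position: {x, y, z}, confidence, lastSeen }
  update(id, position, confidence) {
    this.people.set(id, { position, confidence, lastSeen: Date.now() });
  },
  mostConfident() {
    return [...this.people.values()].sort((a, b) => b.confidence - a.confidence)[0] || null;
  }
};

// Simulated inputs from vision (3-D person detection) and audio (sound direction).
lps.update('person-1', { x: 0.4, y: 0.1, z: 1.8 }, 0.9);
lps.update('person-2', { x: -1.2, y: 0.0, z: 2.5 }, 0.4);
console.log('attention focus candidate:', lps.mostConfident());
```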
  • the PCD 100 may identify a known and an unknown person using vision sensors, auditory sensors or touch inputs for person ID.
  • one or more open source OpenCV libraries may be used for the face identification module.
  • person tracking information and motion detection may be combined to identify a limited set of image regions that are candidates for face detection.
  • the PCD 100 may identify pose or posture of each person using visual classification (e.g., face, body pose, skeleton tracking, etc.), or touch mapping.
  • 3D data sets may be used to incorporate this feature with the sensor modalities of the PCD 100.
  • an open source gesture recognition toolkit may be adopted for accelerating custom gesture recognition based on visual and 3D visual feature tracking.
  • the PCD 100 may be configured to determine focus area so that the PCD 100 may point to or look at the determined focus area.
  • Various sensors may be combined into set of locations/directions for attention focus. For example, estimated location of people may generate a set of attention focus locations in the LPS 208. These may be the maximum likelihood locations for estimations of people, along with the confidence of the attention drive for the given location.
  • the set of focus points and directions are rated by confidence and an overall summary of LPS 208 data for use by other modules is produced.
  • the PCD 100 may use these focus points and directions to select gaze targets so as to address users directly and to 'flip its gaze' between multiple users seamlessly. Additionally, this may allow the PCD 100 robot to look at lower-confidence locations to confirm the presence of nearby users.
  • the PCD 100 may be configured to include activity estimation in the system or may incorporate more sensor modalities for tracking and identification by voice input as well as estimation of emotional state from voice prosody.
  • the LPS 208 may combine data from multiple inputs using grid- based particle filter models for processed input features.
  • the particle filters may provide support for robust online estimation of the physical state of users as well as a representation for multiple hypothesis cases when there is significant uncertainty that must be resolved by further sensing and actions on the PCD's part.
  • the particle filtering techniques may also naturally allow a mixture of related attributes and sensory inputs to be combined into a single probabilistic model of physically measurable user state without requiring an explicit, closed form model of the joint distribution.
  • Grid based particle filters may help to fuse the inputs of 3D (stereo) and 2D (vision) sensing in a single coordinate system and enforce the constraint that the space may be occupied by only one object at any given time.
  • the PCD 100 may be configured to include heuristic proposal distributions and heuristic transition models that may help capture model user state over time even when the PCD 100 may not be looking at them directly. This may allow natural turn taking multi-party conversations using verbal and nonverbal cues with the PCD 100 and may easily fit within the particle filtering framework. As a result, this may allow combining robust statistical estimation with human-centric heuristics in a principled fashion.
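To make the fusion idea concrete, the toy filter below works on a one-dimensional grid of bearings: each sensor contributes a likelihood over the cells, the cell weights are multiplied and renormalised, and the peak cell is the current estimate. A real LPS grid would be three-dimensional with particle dynamics and learned priors; everything here is a simplified assumption.

```javascript
// Toy grid-based fusion of two sensor likelihoods over a 1-D set of bearings.
const cells = Array.from({ length: 9 }, (_, i) => i - 4); // bearings -4..4 (arbitrary units)
let belief = cells.map(() => 1 / cells.length);           // uniform prior

function gaussian(x, mean, sigma) {
  return Math.exp(-((x - mean) ** 2) / (2 * sigma * sigma));
}

function fuse(belief, likelihood) {
  const unnorm = belief.map((b, i) => b * likelihood[i]);
  const total = unnorm.reduce((a, b) => a + b, 0);
  return unnorm.map(v => v / total);
}

// Vision says the person is near cell +1; sound localization says near +2.
belief = fuse(belief, cells.map(c => gaussian(c, 1, 1.5)));
belief = fuse(belief, cells.map(c => gaussian(c, 2, 2.0)));

const best = cells[belief.indexOf(Math.max(...belief))];
console.log('fused estimate near cell', best);
```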
  • the LPS 208 may learn prior probability distributions from repeated interaction and will adapt to the 'hot spots' in a space where people may emerge from hallways, doors, and around counters, and may use this spatial information to automatically target the most relevant locations for users.
  • the low-level image and signal processing code may be customized and based on quality open source tools such as OpenCV, the integrating vision toolkit (IVT), Eigen for general numerical processing, and processor-specific optimization libraries.
  • the PCD 100 may be configured to recognize from a video stream various levels of emotions such as joy, anger, contempt, disgust, fear, sadness, confusion, frustration, and surprise.
  • the PCD 100 may be configured to determine head position, gender, age, and whether someone is wearing glasses, has facial hair, etc.
  • the audio input system is focused on the user.
  • the PCD 100 may be configured to update the direction of the audio beam-forming function in real time for example, depending on robot movement, kinematics and estimated 3D focus of attention directions. This may allow the PCD 100 to selectively listen to specific 'sectors' where there is a relevant and active audio input. This may increase the reliability of ASR and NLU functions through integration with full 3D person sensing and focus of attention.
  • spatial probability learning techniques may be employed to help the PCD 100 engage more smoothly when users enter its presence.
  • the PCD 100 may remember the sequences of arrival and joint presence of users and accumulate these statistics for a given room. This may give the PCD 100 an ability to predict engagement rules with the users on room entry and thereby may enable the PCD 100 to turn to a sector for a given time period and even guess the room occupants. For example, this feature may provide the PCD 100 an ability to use limited predictions to support interactions like "Hey, Billy is that you?" before the PCD 100 has fully identified someone entering the room. The PCD 100 may be turning to the spatial direction most likely to result in seeing someone at that time of day at the same time.
  • the PCD 100 may be a fully autonomous, artificial character.
  • the PCD 100 may have emotions, may select its own goals (based on user input), and execute a closed-loop real-time control system to achieve those goals to keep users happy and healthy.
  • the psycho-social interaction module (PSIM) is the top layer of the closed-loop, discrete-time control system that may process outputs of the sensors and select actions for outputs and expressions.
  • Various supporting processes may proceed concurrently on the CPU, and sensory inputs may be delivered asynchronously to the decision-making module.
  • the "tick" is the decision cycle where the accumulated sensor information, current short-term memory/knowledge and task-driven, intentional state of the PCD 100 may be combined to select new actions and expressions.
  • FIG. 3A depicts architecture of the PSIM 300 in accordance with the exemplary and non-limiting embodiments.
  • the core of the PSIM 300 is an executive 302 that orchestrates the operation of the other elements.
  • the executive 302 is responsible for the periodic update of the brain of the PCD 100.
  • Each "tick" of the PSIM 300 may include a set of processing steps that move towards issuing new commands to the psycho-social expression module in the following fashion:
  • Asynchronous inputs from the psycho-social perception 304 are sampled and updated into the blackboard 306 of the decision module.
  • the input may include information such as person locations, facial ID samples, and parsed NLU utterances from various users.
  • Results from any knowledge query operations are sampled into the blackboard 306 from the psycho-social knowledge base 308.
  • This may collect the results of deferred processing of query operations for use in current decisions.
  • Task Network 310 Think/Update
  • the executive 302 may run the "think" operation of the task network 310 and any necessary actions and decisions are made at each level.
  • the set of active nodes in the task network 310 may be updated during this process.
  • the task network 310 is a flexible form of state machine based logic that acts as a hierarchical controller for the robot's interaction.
  • Output Handling: Outputs loaded into specific blackboard 306 frames are transferred to the psycho-social expression module 312.
  • the executive 302 may also provide the important service of asynchronous dispatch of the tasks in the task network 310. Any task in the network 310 may be able to defer computation to concurrent background threads by requesting an asynchronous dispatch to perform any compute intensive work. This feature may allow the task network 310 to orchestrate heavyweight computation and things like slow or even blocking network I/O as actions without "blocking" the decision cycle or changing the reactivity of decision process of the PCD 100.
  • the executive 302 may dispatch planning operations that generate new sections of the task network 310 and they will be dynamically attached to the executing tree to extend operation through planning capabilities as the product's intelligence matures.
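One decision tick, as laid out above, can be sketched end to end: sample asynchronous perception into the blackboard, pull in deferred knowledge-query results, run the task network's think/update, then hand the outputs to the expression module. The module shapes and data below are illustrative assumptions only.

```javascript
// Toy version of a single PSIM decision "tick".
const blackboard = {};

const perception = { sample: () => ({ personLocation: { x: 0.5, z: 1.4 }, utterance: 'hello' }) };
const knowledge = { pollResults: () => ({ knownPerson: 'Jane' }) };
const taskNetwork = {
  think(bb) {
    if (bb.utterance === 'hello' && bb.knownPerson) {
      return { speech: `Hi ${bb.knownPerson}! How are you doing?`, lookAt: bb.personLocation };
    }
    return {};
  }
};
const expression = { send: (out) => console.log('expression module receives', out) };

function tick() {
  Object.assign(blackboard, perception.sample());     // 1. sample perception inputs
  Object.assign(blackboard, knowledge.pollResults()); // 2. collect deferred query results
  const outputs = taskNetwork.think(blackboard);      // 3. task network think/update
  expression.send(outputs);                           // 4. output handling
}

tick();
```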
  • the task network 310 may be envisioned as a form of Concurrent Hierarchical Finite State Machine (CHFSM).
  • CHFSM Concurrent Hierarchical Finite State Machine
  • the task network design may enable clean, effective implementation and composition of tasks in a traditional programming language.
  • FIG. 4 illustrates a task network that shows a simplified version of a greeting interaction by the PCD 100.
  • the architecture of the task network 310 enables various expressions, movements, sensing actions and speech to be integrated within the engine, thereby giving designers complete control over interaction dynamics of the PCD 100. As illustrated, a tiny portion of the network is active at any time during the operation.
  • the visual task network representation may be used to communicate with both a technical and a design audience as part of content creation.
  • the PIR sensor of the PCD 100 has detected a person entering the area. The PCD 100 is aware of the fact that the PCD 100 may need to greet someone and starts the "Greet User" sequence.
  • This "Greet User” sequence may initialize tracking on motion cues and then say "Hello", while updating tracking for the user as they approach.
  • the PCD 100 may keep updating the vision input to capture a face ID of the User.
  • the ID says it's Jane so the PCD 100 moves on to the next part of the sequence where the PCD 100 may form an utterance to check in on how Jane is doing and opens its ASR/NLU processing window to be ready for responses.
  • a knowledge query may be used to classify the utterance into "Good" or "Bad" and the PCD 100 may form an appropriate physical and speech reaction for Jane to complete its greeting.
  • the network may communicate the concept of how the intelligence works.
  • the PCD 100 may be configured to include an engine that may complement the sociable nature of the PCD 100.
  • the engine may include a tagging system for modifying the speech output.
  • the engine may allow controlling the voice quality of the PCD 100.
  • recordings may be done by a voice artist so as to control the voice of the PCD 100.
  • the engine may include features such as high quality compressed audio files for embedded devices and a straightforward pricing model.
  • the PCD 100 may include an animation engine for providing animations for physical joint rotations; graphics, shape, texture, and color; LED lighting, or mood coloring; timing; and any other expressive aspect of the PCD 100.
  • Animations can be accompanied by other expressive outputs such as audio cues, speech, scent, etc.
  • the animation engine may then play all or parts of that animation at different speeds, transitions, and between curves, while blending it with procedural animations in real-time. This engine may flexibly accommodate different PCD models, geometry, and degrees of freedom.
  • the PCD 100 may be configured to employ an algorithm that may orient PCD 100 towards points in 3D space procedurally.
  • the eyes of the PCD 100 may appear to be fixed on a single point while the body of the PCD 100 may be playing a separate animation, or the eye may lead while the body may follow to point in a particular direction.
  • a closed-form, geometric solver to compute PCD's look-at target may be used. This target pose is then fed into a multi-target blend system which may include support for acceleration constraints, additive blending/layering, and simulated VOR (vestibulo-ocular reflex).
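A much-simplified sketch of the look-at idea: convert a 3-D target point into pan/tilt angles in closed form, then blend the current pose toward that target pose with a rate limit standing in for acceleration constraints. The real engine adds multi-target blending, layering, and simulated VOR; the math and limits below are assumptions for illustration.

```javascript
// Closed-form look-at angles plus a rate-limited blend toward the target pose.
function lookAtAngles(target) {
  const pan = Math.atan2(target.x, target.z);
  const tilt = Math.atan2(target.y, Math.hypot(target.x, target.z));
  return { pan, tilt };
}

function blendToward(current, target, maxStep = 0.1) {
  const step = (a, b) => a + Math.max(-maxStep, Math.min(maxStep, b - a));
  return { pan: step(current.pan, target.pan), tilt: step(current.tilt, target.tilt) };
}

let pose = { pan: 0, tilt: 0 };
const target = lookAtAngles({ x: 0.6, y: 0.3, z: 1.2 });
for (let i = 0; i < 3; i++) {
  pose = blendToward(pose, target);
  console.log('pose after step', i, pose);
}
```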
  • the animation engine may include a simulator that may play and blend animations and procedural animations virtually.
  • the simulator may simulate sensory input such as face detection.
  • a physical simulation into the virtual model may be built, considering the mass of the robot, the power of the motors, and the robot's current draw limits, to validate and test.
  • the graphical representation of the persona may be constructed using joints to allow it to morph and shape itself into different objects.
  • An eye graphics engine may use custom animation files to morph the iris into different shapes, blink, change its color, and change the texture to allow a full range of expression.
  • the PCD API may support the display of graphics, photos, animations, videos, and text in a 2D scene graph style interface.
  • the PCD 100 is a platform, based on a highly integrated, high-performance embedded Linux system, coupled with an ecosystem of mobile device "companion” apps, a cloud-based back-end, and an online store with purchasable content and functionality.
  • the PCD SDK may take advantage of Javascript and the open language of the modern web development community so as to provide an open and flexible platform on which third party developers can add capabilities with a low learning curve. All PCD apps, content and services created by the PCD SDK are available for download from the PCD App Store. All of PCD's functions, including TTS, sensory awareness, NLU, animations, and the others will be available through the PCD API.
  • This API uses NodeJS, a JavaScript platform that is built on top of V8, Chrome's open source JavaScript engine. NodeJS uses an event driven model that is fast and efficient and translates well into robotics programming. NodeJS comes with a plethora of functionality out-of-the-box and is easily extensible as add-ons.
• PCD's API will be a NodeJS add-on. Because add-ons are easily removed or modified, the ways in which developers are able to interact with PCD may be controlled. For example, developers may create an outbound socket, but the number of outbound connections may also be limited.
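• As a non-limiting illustration of the event-driven NodeJS style described above, a third-party skill might be written as follows; the module name 'pcd' and every method shown are hypothetical stand-ins rather than the actual PCD API.

    // Hypothetical third-party skill against a PCD-style Node.js add-on.
    // Every name shown is illustrative only; the real SDK surface may differ.
    const EventEmitter = require('events');

    // Stand-in for the native add-on so this sketch runs without hardware.
    const pcd = new EventEmitter();
    pcd.say = (text) => console.log(`[TTS] ${text}`);
    pcd.animate = (clip) => console.log(`[ANIM] playing ${clip}`);

    // Event-driven style maps naturally onto Node's model: react to sensory
    // events instead of polling.
    pcd.on('faceDetected', (face) => {
      pcd.animate('greeting');
      pcd.say(face.name ? `Hi ${face.name}!` : 'Hello there!');
    });

    // Simulate a perception event arriving from the robot.
    pcd.emit('faceDetected', { name: 'Jane', confidence: 0.93 });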
• a sophisticated cloud-based back-end platform may be used to support PCD's intelligence, to retrieve fresh content and to enable people to stay connected with their family.
• the PCD device in the home may connect to PCD servers in the cloud via Wi-Fi. Access to PCD cloud servers relies on highly secure and encrypted web communication protocols.
  • Various applications may be developed for iOS, Android and HTML5 that may support PCD users, caregivers and family members on the go. With these mobile and web apps, the PCD 100 may always be with you, on a multitude of devices, providing assistance and all the while learning how to better support your preferences, needs and interests. Referring to FIG.
  • the PCD 100 may be configured to mirror in the cloud all the data that may make the PCD 100 unique to his family, so that users can easily upgrade to future PCD robot releases and preserve the persona and relationships they've established.
  • PCD's servers may be configured to collect data in the cloud storage 214 and compute metrics from the PCD robot and other connected devices to allow machine learning algorithms to improve the user models 216 and adapt the PCD persona model 218.
  • the collected data at the cloud storage 214 may be used to analyze what PCD features are resonating best with users, and to understand usage patterns across the PCD ecosystem, in order to continually improve the product offering.
• a cloud-based back-end platform may contain a database system to be used for storage and distribution of data that is intended to be shared among a multitude of PCDs.
  • the cloud-based back end platform may also host service applications to support the PCDs in the identification of people (for example Voice ID application) and the gathering of personal multi-modal data through interworking with social networks.
• the one or more PCD 100 may be configured to communicate with a cloud-based server back-end through RESTful web services using compressed JSON.
  • a zero-configuration network protocol along with an OAUTH authentication model may be used to validate identity.
• a security framework may be applied to provide additional security protocols around roles and permissions, such as the Shiro™ security framework offered by Apache™, among others. All sensitive data will be sent over SSL. On the server side, data may be secured using a strict firewall configuration and OAUTH to obtain a content token. In addition, all calls to the cloud-based servers may be required to have a valid content token.
• a server API is used that includes a web service call to get the latest content for a given PCD device.
  • This web service may provide a high level call that returns a list of all the pending messages, alerts, updated lists (e.g., shopping, reminders, check-ins and the like) and other content in a concise, compact job manifest.
  • the PCD robot may then retrieve the pending data represented in that manifest opportunistically based on its current agenda.
  • PCD's truth is in the cloud, meaning that the master record of lists, reminders, check-ins and other application state is stored on the PCD Servers.
  • the API may be called frequently and the content collected opportunistically (but in a timely manner).
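• As a non-limiting sketch of the manifest-then-fetch pattern described above, a device-side client might look as follows; the host name, endpoint paths and field names are assumptions rather than the actual PCD server API.

    // Sketch of the "manifest first, fetch opportunistically" pattern.
    // Host, paths, and field names below are assumptions for illustration.
    const https = require('https');
    const zlib = require('zlib');

    function getJson(path, token) {
      return new Promise((resolve, reject) => {
        const req = https.request({
          host: 'api.example-pcd-cloud.com',
          path,
          headers: { Authorization: `Bearer ${token}`, 'Accept-Encoding': 'gzip' },
        }, (res) => {
          const chunks = [];
          const stream = res.headers['content-encoding'] === 'gzip'
            ? res.pipe(zlib.createGunzip())   // decompress compressed JSON payloads
            : res;
          stream.on('data', (c) => chunks.push(c));
          stream.on('end', () => resolve(JSON.parse(Buffer.concat(chunks).toString())));
        });
        req.on('error', reject);
        req.end();
      });
    }

    async function syncContent(token, isIdle) {
      // One compact call returns the pending work: messages, alerts, list updates.
      const manifest = await getJson('/v1/device/manifest', token);
      for (const job of manifest.jobs) {
        // Retrieve each item opportunistically, e.g. only while the robot is idle.
        if (isIdle()) {
          const item = await getJson(`/v1/content/${job.id}`, token);
          console.log('fetched', job.type, item);
        }
      }
    }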
  • a functionality that is offloaded to the cloud and will not return results in real time may be used. This may tie in closely with the concept of the agenda-based message queuing discussed above.
  • it may involve a server architecture that may allow requests for services to be made over the RESTful web service API and dispatch jobs to application servers.
  • Amazon Simple Workflow (SWF) or similar workflow may be used to implement such a system along with traditional message queuing systems.
  • the content that may require updating may include the operating system kernel, the firmware, hardware drivers, V8 engine or companion apps of the PCD 100. Updates to this content may be available through a web service that returns information about the types of updates available and allows for the request of specific items. Since PCD will often need to be opportunistic to avoid disrupting a user activity the robot can request the updates when it can apply them. Rather than relying on the PCD robot to poll regularly for updates, the availability of certain types of updates may be pushed to the robot.
  • the PCD 100 may send log information to the servers.
  • the servers may store this data in the appropriate container (SQL or NoSQL).
• Tools such as Hadoop (Amazon MapReduce) and Splunk may be used to analyze data. Metrics may also be queryable so that reports may be run on how people interact with and use the PCD 100. The results of these analyses may be used to adjust parameters on how PCD learns, interacts, and behaves, and also on what features may be required in future updates.
  • various training systems and feedback loop may be developed to allow the PCD robot and cloud-based systems to continuously improve.
  • the PCD robots may collect information that can be used to train machine learning algorithms. Some amount of machine learning may occur on the robot itself, but in the cloud, data may be aggregated from many sources to train classifiers.
• the cloud-based servers may allow for ground truth to be determined by sending some amount of data to human coders to disambiguate content with a low probability of being heard, seen or understood correctly. Once new classifiers are created they may be sent out through the Update system discussed above. Machine learning and training of classifiers/predictors may span supervised, unsupervised and reinforcement-learning methods as well as the more complex human coding of ground truth.
  • Training signals may include knowledge that the PCD robot has accomplished a task or explicit feedback generated by the user such as voice, touch prompt, a smiling face, gesture, etc. Accumulating images from the cameras that may include a face and audio data may be used to improve the quality of those respective systems in the cloud.
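• The cloud-side training loop described above might be sketched, purely for illustration, as follows; the confidence threshold, queue handling and function names are assumptions.

    // Illustrative sketch of the cloud-side training loop: low-confidence
    // recognitions are routed to human coders for ground truth, the rest are
    // accepted automatically; labeled data is later used to retrain classifiers.
    const CONFIDENCE_THRESHOLD = 0.6;
    const labeled = [];          // accumulated (sample, label) pairs
    const humanCodingQueue = []; // samples awaiting human disambiguation

    function ingest(sample) {
      if (sample.confidence >= CONFIDENCE_THRESHOLD) {
        labeled.push({ data: sample.data, label: sample.predictedLabel });
      } else {
        humanCodingQueue.push(sample); // low probability of being heard/seen correctly
      }
    }

    function onHumanLabel(sample, trueLabel) {
      labeled.push({ data: sample.data, label: trueLabel });
    }

    function maybeRetrain(trainClassifier, publishUpdate, minNewSamples = 1000) {
      if (labeled.length >= minNewSamples) {
        const model = trainClassifier(labeled); // supervised retraining step
        publishUpdate(model);                   // pushed to robots via the update system
        labeled.length = 0;
      }
    }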
  • a telepresence feature including a video chat option may be used.
  • a security model around the video chat to ensure the safety of users is enabled.
  • a web app and also mobile device apps that utilize the roles, permissions and security infrastructure to protect the end users from unauthorized use of the video chat capabilities may be used.
• the high-level capabilities of PCD's software system are built on a robust and capable Embedded Linux platform that is customized with key libraries, board support, drivers and other dependencies to provide our high-level software systems with a clean, robust, reliable development environment.
  • the top-level functional modules are realized as processes in our embedded Linux system.
• the module infrastructure of the PCD is specifically targeted at supporting flexible scripting of content, interactions and behavior in JavaScript while supporting computationally taxing operations in C++ and C based on native language libraries. It is built on the V8 JavaScript engine and the successful Node.js platform with key extensions and support packaged as C++ modules and libraries.
  • FIG. 5 A illustrates hardware architecture of the PCD 100 that may be engineered to support the sensory, motor, connectivity, power and computational needs of the one or more capabilities of the PCD 100.
• one or more hardware elements of the PCD 100 are specializations and adaptations of core hardware that may have been used in high-end tablets and other mobile devices.
• An overall physical structure of the PCD 100 may also be referred to herein as a 3-ring Zetatype.
  • Such type of physical structure of the PCD 100 may provide the PCD 100 a clean, controllable and attractive line of action.
  • the structure may be derived from the principles that may be used by character animators to communicate attention and emotion.
  • the physical structure of the PCD 100 may define the boundaries of the mechanical and electrical architecture based on the three ring volumes, ranges of motion and necessary sensor placement.
  • the PCD 100 may be configured to include three-axes for movement, one or more stereo vision camera 504, a microphone array 506, touch sensing capabilities 508 and a display such as a LCD display 510.
  • the three axes for movement may support emotive expression and the ability to direct sensors and attend users in a natural way.
  • the stereo vision camera 504 may be configured to support 3D location and tracking of users, for providing video input, camera snaps and the like.
  • the microphone array 506 may support beam-formed audio input to maximize ASR performance.
  • the touch sensing capabilities 508 may enable an alternative interaction to make the PCD 100 like a friend, or as a form of user interface.
• the LCD display 510 may support emotive expression as well as dynamic information display. Ambient LED lighting may also be included.
  • the hardware architecture 500 may be configured to include an electrical architecture that may be based on a COTS processor from the embedded control and robotics space and combined with high end application processor from the mobile devices and tablet space.
  • the embedded controller is responsible for motion control and low-level sensor aggregation, while the majority of the software stack runs on the application processor.
• the electrical boards in the product are separated by function for the V1 design, and this may provide a modularity to match the physical structure of the robot while preventing design changes on one board from propagating into larger design updates.
  • the electrical architecture may include a camera interface board that may integrate two mobile-industry based low-resolution MIPI camera modules that may support hardware synchronization so that capture images may be registered in time for the stereo system.
  • the stereo cameras are designed to stream video in continuous mode.
  • the camera interface board may support a single RGB application camera for taking high resolution photos and video conference video quality.
  • the RGB application camera may be designed to use for specific photo taking, image snaps and video applications.
  • the hardware architecture may include a microphone interface board that may carry the microphone array 506, an audio processing and codec support 514 and sends a digital stream of audio to a main application processor 516.
• the audio output from our codec 514 may be routed out to speakers 518, which are in a separate section of the body for sound isolation.
  • the hardware architecture may include a body control board 520 that may be integrated in a middle section of the body and provides motor control, low- level body sensing, power management and system wakeup functionality for the PCD 100.
  • the body control board 520 may be built around an industry standard Cortex- M4F microcontroller platform.
  • the architecture 500 may include an application processor board that may provide the core System On Chip (SoC) processor and tie together the remainder of the robot system.
  • the board may use a System On Module (SoM) to minimize the time and expense of developing early prototypes.
  • the application processor board may include the SoC processor for cost reduction and simplified production.
  • the key interfaces of the application processor board may include interface for supporting MIPI cameras, the display, wireless communications and high performance audio.
  • the hardware architecture 500 may be configured to include power management board 522 that may address the power requirements of the PCD 100.
  • the power management board 522 may include power regulators, battery charger and a battery.
  • the power regulators may be configured to regulate the input power so that one or more elements or boards of the hardware architecture 500 may receive a regulated power supply.
  • the battery charger may be configured to charge the battery so as to enable the PCD 100 to operate for long hours.
  • the PCD 100 may have a charging dock/base/cradle, which will incorporate a wall plug and a blind mate charging connector such that the PCD 100, when placed on the base, shall be capable of charging the internal battery.
  • FIG. 6A illustrates an exemplary design of the PCD 100 that may be configured to include the required software and hardware architecture so as to provide various features to the users in a friendly manner.
  • the mechanical architecture of the PCD 100 has been optimized for quiet grace and expressiveness, while targeting a cost effective bill of materials. By carefully selecting the best elements from a number of mature markets and bringing them together in a unique combination for the PCD 100, a unique device is produced.
  • the mechanical architecture depicts placement of various boards such as microphone board, main board, battery board, body control board, camera board at an exemplary position within the PCD 100.
  • one or more vents are provided in the design of the PCD 100 so as to appropriately allow air flow to provide cooling effect.
• PCD utilizes a plurality of sensors in communication with a processor to sense data. As described below, these sensors operate to acquire all manner of sensory input upon which the processor operates via a series of programmable algorithms to perform tasks. In fulfillment of these tasks, PCD 100 makes use of data stored in local memory forming a part of PCD 100 and accesses data stored remotely, such as at a server or in the cloud, via wired or wireless modes of communication. Likewise, PCD 100 makes use of various output devices, such as touch screens, speakers, tactile elements and the like to output information to a user while engaging in social interaction. Additional, non-limiting disclosure detailing the operation and interoperability of data, sensors, processors and modes of communication regarding a companion device may be found in published U.S. Application 2009/0055019 Al, the contents of which are incorporated herein by reference.
  • the embodiments described herein present novel and non-obvious embodiments of features and functionality to which such a companion device may be applied, particularly to achieve social interaction between a PCD 100 and a user. It is understood, as it is known to one skilled in the art, that various forms of sensor data and techniques may be used to assess and detect social cues from a physical environment. Such techniques include, but are not limited to, voice and speech recognition, eye movement tracking, visual detection of human posture, position, motion and the like. Though described in reference to such techniques, this disclosure is broadly drawn to encompass any and all methods of acquiring, processing and outputting data by a PCD 100 to achieve the features and embodiments described herein.
  • PCD 100 may be expressed in a purely physical embodiment, as a virtual presence, such as when executing on a mobile computational device like a mobile phone, PDA, watch, etc., or may be expressed as a mixed mode physical/virtual robot.
  • the source information for driving a mixed mode, physical, or virtual PCD may be derived as if it is all the same embodiment. For example, source information as might be entered via a GUI interface and stored in a database may drive a mechanical PCD as well as the animation component of a display forming a part of a virtual PCD.
  • source information comprises a variety of sources, including, outputs from AI systems, outputs from real-time sensing; source animation software models; kinematic information models, and the like.
• data may be pushed from a single source regarding the behavior of a purely virtual character (at the source) and then output in both the physical and the virtual modes for a physical PCD. In this manner, embodiments of a PCD may span the gamut from purely physical to entirely virtual to a mixed mode involving some of both.
  • PCD 100 possesses and is expressed as a core persona that may be stored in the cloud, and that can allow what a user does with the physical device to be remembered and persist, so that the virtual persona can remember and react to what is happening with the physical device, and vice versa.
  • PCD 100 incorporates a generally tripartite design comprising three distinct body segments separated by a generally circular ring. By rotating each body segment about a ring, such as via internal motors (not shown), PCD 100 is configured to alter its shape to achieve various form factors as well as track users and other objects with sensors 102, 104, 106, 108, 112.
  • attributes of PCD 100 may be statically or dynamically configured including, but not limited to, a shape of touch screen 104, expressive body movement, specific expressive sounds and mnemonics, specific quality of prosody and vocal quality when speaking, the specifics of the digital interface, the "faces" of PCD 100, a full spectrum LED lighting element, and the like.
• the PCD 100 may be configured to employ a multi-modal user interface wherein many inputs and outputs may be active simultaneously. Such a concurrent interface may provide a robust user experience.
• one or more of the user interface inputs or outputs might be compromised depending upon the environment, resulting in less than optimal operation of the PCD 100. Operating the various modes simultaneously may help fail-safe the user experience and interaction with the device to guarantee no loss of communication.
  • the PCD 100 may be configured to process one or more inputs so as to provide enriching experience to the user of the PCD 100.
• the PCD 100 may be configured to recognize speech of the user. For example, the PCD 100 may identify a "wake up word" and/or other mechanism from the speech so as to reduce "false positive" engagements.
  • the PCD 100 may be configured to recognize speech in a near-field range of N x M feet, where N and M may be determined by the sound quality of speech and detection sensitivity of the PCD.
  • the PCD 100 may be configured to recognize speech with a far-field range in excess of N feet covering at least the area of 12 feet by 15 feet room size.
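• As a non-limiting sketch of the "wake up word" gating described above, full speech recognition might only be acted upon for a short window after the wake-up word is heard; the wake phrase, window length and function names below are illustrative assumptions.

    // Sketch of wake-word gating: spoken commands are only acted upon for a
    // short window after the wake-up word, reducing false-positive engagements.
    const WAKE_WORD = 'hey pcd';
    const WAKE_WINDOW_MS = 8000;
    let listeningUntil = 0;

    function onSpeechHypothesis(text, handleCommand) {
      const now = Date.now();
      if (text.toLowerCase().includes(WAKE_WORD)) {
        listeningUntil = now + WAKE_WINDOW_MS;   // open the NLU processing window
        return;
      }
      if (now < listeningUntil) {
        handleCommand(text);                     // only act while the window is open
      }
      // Outside the window, speech is ignored rather than acted upon.
    }

    // Usage:
    onSpeechHypothesis('hey pcd', () => {});
    onSpeechHypothesis('what is the weather', (t) => console.log('command:', t));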
  • PCD 100 may be configured to identify sounds other than spoken language.
  • the PCD may employ a sound signature database configured with sounds that the PCD can recognize and act upon.
  • the PCD may share the content of this database with other PCD devices via direct or cloud based communications.
  • the sounds other than the spoken language may comprise sounds corresponding to breaking glass, door bell, phone ringing, a person falling down, sirens, gun shots, audible alarms, and the like.
  • the PCD 100 may be configured to "learn" new sounds by asking a user to identify the source of sounds that do not match existing classifiers of the PCD 100.
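• A non-limiting sketch of the sound signature lookup and "learning" prompt described above follows; the feature vectors, distance metric and threshold are simplified assumptions.

    // Sketch of a sound signature lookup: a detected sound is compared against
    // known signatures; sounds matching no classifier trigger a prompt asking
    // the user to identify the source, and the labeled sound is then stored.
    const soundSignatures = [
      { label: 'doorbell',       features: [0.9, 0.1, 0.3] },
      { label: 'breaking glass', features: [0.2, 0.8, 0.7] },
      { label: 'phone ringing',  features: [0.7, 0.4, 0.2] },
    ];

    function distance(a, b) {
      return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
    }

    function classifySound(features, askUser, maxDistance = 0.35) {
      let best = null;
      for (const sig of soundSignatures) {
        const d = distance(features, sig.features);
        if (!best || d < best.d) best = { label: sig.label, d };
      }
      if (best && best.d <= maxDistance) {
        return best.label;                                 // recognized, act upon it
      }
      // No existing classifier matched: learn a new sound from the user.
      const label = askUser('I heard a new sound. What was it?');
      soundSignatures.push({ label, features });
      return label;
    }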
  • the device may be able to respond to multiple languages.
  • the PCD 100 may be configured to respond to the user outside of the near-field range with the wake-up word. The user may be required to get into the device's field of vision.
  • the PCD 100 may have touch sensitive areas on its surface that may be used when the speech input is compromised for any reason. Using these touch inputs, the PCD 100 may ask yes/no questions or display options on the screen and may consider user's touch on the screen as inputs from the user. In some embodiments, the PCD 100 may use vision and movement to differentiate one user from another, especially when two or more users are within the field of vision. Further, the PCD 100 may be capable of interpreting gross skeletal posture and movement, as well as some common gestures, within the near-field range. These gestures may be more oriented toward social interaction than device control. In some embodiments, the PCD 100 may be configured to include cameras so as to take photos and movies.
  • the camera may be configured to take photos and movies when the user is within a predetermined range of the camera.
  • the PCD 100 may be configured to support video conferencing (pop-ins). Further, the PCD 100 may be configured to include a mode to eliminate "red eye" when the camera is in photo mode.
  • the PCD 100 may be configured to determine if it is being picked up, carried, falling, and the like. In addition, the PCD 100 may be configured to implement a magnetometer. In some embodiments, the PCD 100 may determine ambient lighting levels. In addition, the PCD 100 may adjust the display and accent lighting brightness levels to an appropriate level based on ambient light level. In some embodiments, the PCD 100 may have the ability to use GPS to approximate the location of a device. The PCD 100 may determine relative location within a residence. In some embodiments, the PCD 100 may be configured to include one or more passive IR motion detection sensors (PIR) to aid in gross or far field motion detection.
  • the PCD 100 may include at least one thermistor to indicate ambient temperature of the environment.
  • the PCD 100 may be configured to speak "one voice" English to a user in an intelligible, natural voice.
  • the PCD 100 may be configured to change the tone of the spoken voice to emulate the animated device emotional state (sound sad when PCD 100 is sad, etc.).
  • the PCD 100 may be configured to include at least one speaker capable of playing speech, high fidelity music and sound effects.
  • the PCD 100 may have multiple speakers, one for speech, one for music, and/or additional speakers for special audible signals and alarms.
  • the speaker dedicated for speech may be positioned towards the user and tuned for voice frequency response.
  • the speaker dedicated to music may be tuned for full frequency response.
  • the PCD 100 may be configured to have a true color, full frame rate display.
• the displayed active image may be (masked) round, at least 4-1/2 inches in diameter.
  • the PCD 100 may have a minimum of 3 degrees of freedom of movement, allowing for both 360-degree sensor coverage of the environment and a range of humanlike postures and movements (expressive line of action).
  • the PCD 100 may be configured to synchronize the physical animation to the sound, speech, accent lighting, and display graphics. This synchronization may be close enough as to be seamless to human perception.
  • the PCD 100 may have designated areas that may use accent lighting for both ambient notification and social interaction.
  • the accent lighting may help illuminating the subject in a photo when the camera of the PCD 100 is in photo or movie capture mode.
• the PCD 100 may have a camera flash that will automatically illuminate the subject in a photo when the camera is in photo capture mode. Preferably, however, the accent lighting may accomplish the illumination of the subject.
  • the PCD 100 may have a mode to eliminate "red eye" when the camera is in photo capture mode.
  • the PCD 100 may identify and track the user.
  • the PCD 100 may be able to notice when a person has entered a near-field range.
  • the near-field range may be of 10 feet.
  • the PCD 100 may be able to notice when a person has entered a far-field range.
  • the far-field range may be of 10 feet.
  • the PCD 100 may identify up to 5 different users with a combination of video (face recognition), depth camera (skeleton feature matching), and sound (voice ID).
  • a "learning" routine is used by the PCD 100 to learn the users that the PCD 100 will be able to recognize.
  • the PCD 100 may locate and track users in a full 360 degrees within a near-field range with a combination of video, depth camera, and auditory scene analysis. In some embodiments, the PCD 100 may locate and track users in a full 360 degrees within a far- field range of 10 feet. In some embodiments, the PCD 100 may maintain an internal map of the locations of different users relative to itself whenever users are within the near-field range. In some embodiments, the PCD 100 may degrade functionality level as the user gets further from the PCD 100. In an embodiment, a full functionality of the PCD 100 may be available to users within the near-field range of the PCD 100. In some embodiments, the PCD 100 may be configured to track mood and response of the users.
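• As a non-limiting sketch of the multi-modal identification and internal location map described above, the following is illustrative only; the modality weights, score shapes and threshold are assumptions.

    // Sketch of multi-modal user identification and a simple internal map of
    // user locations relative to the robot.
    const MODALITY_WEIGHTS = { face: 0.5, voice: 0.3, skeleton: 0.2 };
    const userMap = new Map(); // userId -> { bearingDeg, distanceM, lastSeen }

    // scores: { face: { jane: 0.8, bob: 0.1 }, voice: {...}, skeleton: {...} }
    function identifyUser(scores) {
      const combined = {};
      for (const [modality, weight] of Object.entries(MODALITY_WEIGHTS)) {
        for (const [user, score] of Object.entries(scores[modality] || {})) {
          combined[user] = (combined[user] || 0) + weight * score;
        }
      }
      let best = null;
      for (const [user, score] of Object.entries(combined)) {
        if (!best || score > best.score) best = { user, score };
      }
      return best && best.score > 0.5 ? best.user : null; // unknown below threshold
    }

    function updateMap(userId, bearingDeg, distanceM) {
      userMap.set(userId, { bearingDeg, distanceM, lastSeen: Date.now() });
    }

    const who = identifyUser({ face: { jane: 0.8 }, voice: { jane: 0.6 }, skeleton: {} });
    if (who) updateMap(who, 45, 2.1); // user is at 45 degrees, 2.1 meters away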
  • the PCD 100 may determine the mood of a user or group of users through a combination of video analysis, skeleton tracking, speech prosody, user vocabulary, and verbal interrogation (i.e., device asks "how are you?" and interprets the response).
  • the PCD 100 may be programmed with human social code to blend emotive content into its animations.
  • programmatic intelligence should be applied to the PCD 100 to adjust the emotive content of the outputs appropriately in a completely autonomous fashion, based on perceived emotive content of user expression.
  • the PCD 100 may be programmed to attempt to improve the sensed mood of the user through a combination of speech, lighting, movement, and sound effects.
• the PCD social code may provide for the ability to build rapport with the user, e.g., mirroring behavior, mimicking head poses, etc.
• the PCD 100 may be programmed to deliver proactively customized Internet content comprising sports news and games, weather reports, news clips, information about current events, etc., to a user in a social, engaging manner based on learned user preferences and/or to develop its own preferences for sharing that information and data as a way of broadening the user's potential interests.
  • the PCD device may be programmed with the capability of tailoring both the type of content and the way in which it is communicated to each individual user that it recognizes.
  • the PCD device may be programmed with the capability of improving and optimizing the customization of content/delivery to individual users over time based on user preferences and user reaction to and processing habits of the delivered Internet content.
  • the PCD may be programmed to engage in a social dialogue with the user to confirm that the delivered information was understood by the user.
  • the PCD 100 may be configured to manage and monitor activities of the user.
  • the communication devices 122 in conjunction with the service may, at the user's request, create and store to-do, grocery, or other lists that can be communicated to the user once they have left for the shopping trip.
• the PCD 100 may push the list (via the service) to the user's mobile phone as a text (SMS) message, or the list may be pulled by a user of either our mobile or web app, upon request.
  • the user may make such a request via voice on the PCD 100, or via the mobile or web app through the service.
  • the PCD 100 may interact with user to manage lists (i.e., removing items that were purchased/done/no longer needed, making suggestions for additional list items based on user history, etc.).
  • the PCD 100 may infer the need to add to a list by hearing and understanding key phrases in ambient conversation (i.e., device hears "we are out of coffee” and asks the user if they would like coffee added to the grocery list).
  • the PCD 100 may be configured to provide user-generated reminders or messages at correct times.
  • the PCD 100 may be used for setting up conditions for delivering reminders at the correct times.
  • the conditions for reminders may include real time conditions such as "the first time you see me tomorrow morning", or "the next time my daughter is here", or even "the first time you see me after noon next Tuesday” and the like.
  • the PCD 100 may engage the user (from a "look-at” as well as a body language/expression perspective) and deliver the reminder in an appropriate voice and character.
  • the PCD 100 may analyze mood content of a reminder and use this information to influence the animation/lighting/delivery of that reminder. In other embodiments, the PCD 100 may follow up with the user after the PCD 100 has delivered a reminder by asking the user if they performed the reminded action.
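• A non-limiting sketch of the condition-based reminder delivery described above follows; the context fields and helper names are assumptions.

    // Sketch of condition-based reminders: each reminder carries a predicate
    // over sensed context ("first time you see me tomorrow morning", "next
    // time my daughter is here") instead of a fixed clock time.
    const reminders = [];

    function addReminder(text, condition) {
      reminders.push({ text, condition, delivered: false });
    }

    // Called whenever the PCD detects a user; ctx might include who was seen,
    // the current hour, and whether this is the first sighting today.
    function onUserSeen(ctx, deliver) {
      for (const r of reminders) {
        if (!r.delivered && r.condition(ctx)) {
          deliver(r.text);          // engage the user and speak the reminder
          r.delivered = true;
        }
      }
    }

    // "Remind me to pack my umbrella the first time you see me tomorrow morning."
    addReminder('Pack your umbrella',
      (ctx) => ctx.user === 'jane' && ctx.firstSightingToday && ctx.hour < 12);

    onUserSeen({ user: 'jane', firstSightingToday: true, hour: 8 },
      (text) => console.log('Reminder:', text));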
  • the PCD 100 may monitor absence of the user upon a request that may be given by the user. For example, the user may tell the PCD 100 when and why they are stepping away (e.g., "I'm going for a walk now"), and the expected duration of the activity so that the PCD 100 may ensure that the user has returned within a desired/requested timeframe. Further, the PCD 100 may notify emergency contacts as have been specified by the user for this eventuality, if the user has not returned within the specified window. The PCD 100 may notify the emergency contacts through text message and/or through a mobile app.
  • the PCD 100 may recognize the presence and following up on the activity (i.e., asking how the activity was, or other questions relevant to the activity) when the user has returned. Such type of interaction may enable a social interaction between the PCD 100 and the user, and also enable collection of information about the user for the learning database.
  • the PCD 100 may show check-out/check-in times and current user status to such family/friends as have been identified by the user for this purpose. This may be achieved through a mobile app.
• the PCD 100 may be capable of more in-depth activity monitoring/patterning/reporting.
  • the PCD 100 may be configured to connect to external networks through one or more data connections.
  • PCD 100 may have access to a robust, high bandwidth wireless data connection such as WiFi Data Connection.
• the PCD 100 may implement the 802.11n WiFi specification with a 2x2 two-stream MIMO configuration in both the 2.4 GHz and 5 GHz bands.
  • the PCD 100 may connect to other Bluetooth devices (medical sensors, audio speakers, etc.).
  • the PCD 100 may implement Bluetooth 4.0 LE (BLE) specification.
  • the BLE enabled PCD 100 device may be configured to customize its UUID to include and share multi -modal user data with other BLE enabled PCD 100 devices.
• the PCD 100 may have connectivity to 3G/4G/LTE or other cellular networks.
  • a multitude of PCD 100 devices may be configured in a meshed network configuration using ad-hoc networking techniques to allow for direct data sharing and communications without the need for a cloud based service.
  • data to be shared among multiple PCD 100 devices may be uploaded and stored in a cloud based data base / data center where it may be processed and prepared for broadcasting to a multitude of PCD 100 devices.
  • a cloud based data service may be combined with a meshed network arrangement to provide for both local and central data storage, sharing, and distribution for a multitude of PCD 100 devices in a multitude of locations.
  • a companion application may be configured to connect with the PCD 100.
  • the companion application may be available on the following platforms: iOS, Android, and Web.
  • the companion application may include an intuitive and easy to use user interface (UI) that may not require more than three interactions to access a feature or function.
  • the companion application may provide user an access to a virtual counterpart of the PCD 100 so that the user may access this virtual counterpart to interact with the real PCD 100.
  • the user may be able to access information such as shopping lists, activity logs of the PCD 100 through the companion application.
  • the companion application may present the user with longitudinal reports of user activity local to the PCD 100.
  • the companion application may connect the user via video and audio to the PCD 100.
  • the companion application may asynchronously alert the user to certain conditions (e.g., a local user is later than expected by a Check-In, there was a loud noise and local user is unresponsive, etc.).
  • an administration/deployment application to allow connectivity or control over a family of devices may be available on a web platform.
• A UI of the administration application may serve hospital/caregiver administrators or purchasers who may need quick access to detailed reports, set-up, deployment, and/or support capabilities.
  • a group may be able to access information stored across a managed set of PCD 100 devices using the administration application.
  • the administration application may asynchronously alert an administrator to certain conditions (e.g., local user is later than expected by a Check-In, there was a loud noise and local user is unresponsive, etc.).
  • the administration application may broadcast messages and reminders across a subset or all of its managed devices.
  • a support console may allow personnel of the PCD 100 to monitor/support/diagnose/deploy one or more devices.
  • the support console may be available on web platform.
  • the support console may support a list view of all deployed PCD devices that may be identified by a unique serial number, owner, institutional deployment set, firmware and application version numbers, or registered exception.
  • the support console may support interactive queries, with tags including serial number, owner, institutional deployment set, firmware and application version numbers, or registered exception. Further, the support console may support the invocation and reporting of device diagnostics.
  • the support console may assist in the deployment of new firmware and software versions (push model). Further, the support console may assist in the deployment of newer NLUs, new apps, etc.
  • the support console may support customer support scenarios, broadcasting of messages to a subset or all deployed devices to communicate things like planned downtime of the service, etc.
  • the support console may need to support access to a variety of on-device metrics, including (but not exclusive to): time spent interacting with the PCD 100, time breakdown across all the apps/services, aggregated hit/miss metrics for audio and video perception algorithms, logged actions (to support data mining, etc.), logged exceptions, alert thresholds (e.g. at what exception level should the support console scream at you?), and others.
  • PCD 100 may engage in teleconferencing.
  • teleconferencing may commence to be executed via a simple UI, either with touch of the body of PCD 100 or touch screen 104 or via voice activation such as may be initiated with a number of phrases, sounds and the like.
  • calls may also be initiated as an output of a Call Scheduling/Prompting feature.
  • PCD 100 may function as a phone using microphone 112 and speaker 110 to receive and output audio data from a user while using a Wi-Fi connection, Bluetooth, a telephony connection or some combination thereof to affect phone functionality.
  • Calls may be either standard voice calls or contain video components.
  • PCD 100 may function as a cameraman for the PCD 100 end of the conversation.
• PCD 100 may be placed in the middle of a table or other social gathering point with a plurality of users, such as a family, occupying the room around PCD 100, all of whom may be up, moving, and active during the call.
  • PCD 100 may point a camera 106 in a desired place.
  • PCD 100 may utilize sound localization and face tracking to keep camera 106 pointed at the speaker/user.
  • PCD 100 may be directed (e.g., "PCD, look at Ruby") by people/users in the room.
  • a remote person may be able to specify a target to be tracked via a device, and the PCD 100 will autonomously look at and track that target.
  • what camera 106 receives as input is presented to the remote participant if, for example, they are using a smart phone, laptop, or another device capable of displaying video.
  • PCD 100 may also function as the "interpreter" for the person on the other end of the link, much like the paradigm of a United Nations interpreter, by receiving voice input, translating the input via a processor, and outputting the translated output. If there is a screen available in the room with PCD 100, such as a TV, iPad, and the like, PCD 100 may send, such as via Bluetooth or Wi-Fi, audio and, if available, video of the remote participant to be displayed on this TV screen. If there is no other screen available, PCD 100 may relay the audio from the remote participant, but no remote video may be available. In such an instance, PCD 100 is merely relaying the words of the remote participant.
  • PCD 100 may be animated and reactive to a user, such as by, for example, blinking and looking down if the remote participant pauses for a determined amount of time, or doing a little dance or "shimmy” if PCD 100 senses that the remote participant is very excited.
  • PCD 100 may be an avatar of the person on the remote end of the link.
  • an eye or other area displayed on touch screen 104 may morph to a rendered version (either cartoon, image based or video stream, among other embodiments) of the remote participant's face.
  • the rendering may be stored and accessible to PCD 100.
  • PCD 100 may also retrieve data associated with and describing a remote user and imitate motions/non-verbal cues of remote user to enhance the avatar experience.
• either remote or local participants can cue the storage of still images, video, and audio clips of the participants and the PCD 100's camera view, or notes (e.g., "PCD, remember this number"). These tagged items will be appropriately meta-tagged and stored in a PCD cloud.
  • PCD 100 may also help stimulate remote interaction upon request. For example, a user may ask PCD 100 to suggest a game, which will initiate Connected Gaming mode, described more fully below, and suggest games until both participants agree. In another example, a user may also ask PCD 100 for something to talk about. In response, PCD 100 may access "PCD In The Know" database targeted at common interests of the conversation participants, or mine a PCD Calendar for the participants for an event to suggest that they talk about (e.g., "Grandma, tell Ruby about the lunch you had with your friend the other day").
  • PCD 100 may suggest calls based on calendar availability, special days, and/or knowledge of presence at another end of the link (e.g., "your mom is home right now, and it's her birthday, would you like to call her?").
  • the user may accept the suggestion, in which case a PCD Call app is launched between PCD 100 and the remote participant's PCD 100, phone, smart device, or Skype account.
  • a user may also accept the suggestion by asking PCD 100 to schedule the call later, in which case a scheduling app adds it to the user's calendar.
  • a call answering and messaging functionality may be implemented with PCD 100. This feature applies to voice or video calls placed to PCD 100 and PCD 100 will not perform call management services for other cellular connected devices.
  • FIG. 7 there is illustrated a flowchart 700 of an exemplary and non-limiting embodiment. As illustrated, at step 702, when a call is placed to PCD 100, PCD 100 may announce the caller to the people in the room. If no one is in the room, PCD 100 may check the user's calendar and, if it indicates that they are not at home, PCD 100 may send the call directly to a voicemail associated with PCD 100, at step 704.
• PCD 100 will, at step 706, use louder sounds (bells, rings, shouts) to get the attention of a person in the house. Once PCD 100 has his user's attention, at step 708, PCD 100 may announce the caller and ask if they would like to take the call. At step 710, a user may respond with a simple touch interface or, ideally, with a natural language interface. If the answer is yes, at step 712, PCD 100 connects the call as described in the Synchronous On-Demand Multimodal Messaging feature. If the answer is no, at step 714, the call is sent to PCD 100 voicemail.
  • PCD 100 may greet them and ask them to leave a message.
  • a voice or voice/video (if caller is using Skype or equivalent) message may be recorded for playback at a later date.
  • PCD 100 may, at step 716, inform them of the message (either verbally with "you have a message", or nonverbally with lighted pompom, etc.) and ask them if they would like to hear it. If yes, PCD 100 may either play back audio or play audio/video message on a TV/tablet/etc. as described above.
  • the user may have the option of saving the message for later. He can either tell PCD 100 to ask again at a specific time, or just "later", in which case PCD 100 will ask again after a predetermined amount of time.
  • PCD 100 may direct the call to voicemail and notify the user that an unidentified call from X number was received, and play back the message if one was recorded. The user may then instruct PCD 100 to effectively block that number from connection/voicemail going forward. PCD 100 may also ask if the user wishes to return the call either synchronously or asynchronously. If user accepts, then PCD 100 launches appropriate messaging mode to complete user request. In some embodiments, PCD 100 may also provide Call Manager functionality for other cellular or landline devices in the home. In yet other embodiments, PCD 100 may answer the call and conversationally prompt the caller to leave a message thus playing role of personal assistant.
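• The incoming-call flow of FIG. 7 might be sketched, purely for illustration, as follows; the helper functions passed in stand in for the robot's perception and user interface and are not part of any published API.

    // Sketch of the incoming-call flow described in FIG. 7 (steps 702-716).
    // The helpers in `env` (isAnyoneHome, ask, sendToVoicemail, etc.) are
    // assumptions standing in for the robot's perception and UI.
    async function handleIncomingCall(caller, env) {
      if (!env.isAnyoneHome()) {
        if (env.calendarSaysAway()) {
          return env.sendToVoicemail(caller);            // step 704: nobody home
        }
        env.playAttentionSounds();                       // step 706: bells, rings
        await env.waitForPerson();
      }
      env.announce(`Call from ${caller.name}. Would you like to take it?`); // step 708
      const answer = await env.ask({ yesNo: true });     // step 710: touch or speech
      if (answer === 'yes') {
        return env.connectCall(caller);                  // step 712
      }
      const message = await env.sendToVoicemail(caller); // step 714
      if (message) {
        env.notifyUser('You have a message.');           // step 716
      }
    }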
  • PCD 100 may incorporate a Connected Story Reading app to enable a remote participant to read a story "through" PCD 100 to a local participant in the room with PCD 100.
  • the reader may interact through a simple web or Android app based interface guided by a virtual PCD 100 through the process of picking a story and reading it.
  • the reader may read the words of the story as prompted by virtual PCD 100.
  • the reader's voice will be played back by the physical PCD 100 to the listener, with preset filters applied to the reader's voice so that the reader can "do the voices" of the characters in an incredibly compelling way even if he/she has no inherent ability to do this.
• the reader's interface may also show the "PCD's Eye View" video feed of the listener, and PCD 100 may use its "Cameraman" ability to keep the listener in the video.
  • Physical PCD 100 may also react to the story with short animations at appropriate times (shivers of fear, etc.), and PCD's 100 eye, described above, may morph into different shapes in support of story elements.
  • This functionality may be wrapped inside a PCD Call feature such that the reader and the listener can interrupt the story with conversation about it, etc.
  • the app may recognize that the reader has stopped reading the story, and pause the feature so the reader and listener can converse unfiltered.
  • the teller could prerecord the story and schedule it to be played back later using the Story Relay app described below.
  • a user may utilize PCD 100 to communicate with "in-network” members via a "push to talk” or "walkie-talkie” style interface.
  • This feature may be accessed via a single touch on the skin or a screen icon on PCD 100, or via a simple voice command "PCD 100, talk to Mom".
• this feature is limited to only PCD-to-PCD conversation, and may only be usable if both PCDs 100 detect a user presence on their end of the link.
  • FIG. 8 there is illustrated a flowchart 800 of an exemplary and non-limiting embodiment.
  • a user/story teller may record a story at any time for PCD 100 to replay later.
  • Stories can be recorded in several ways:
• PCD 100: the storyteller tells their story to a PCD 100, who records it for playback
• Virtual PCD 100 (web interface or Android app): the user is guided by virtual PCD 100 to tell their story to a webcam. They also have the opportunity to incorporate more rich animations/sound effects/background music in these types of stories.
  • PCD 100 may replay the story according to the scheduling preferences set by the teller, at step 804.
  • the listener will be given the option to hear the story at the scheduled time, and can accept, decline, or reschedule the story.
  • PCD 100 may take still photos of the listener at a predetermined rate. Once the story is complete, PCD 100 may ask listener if he/she would like to send a message back to the storyteller, at step 806. If the user accepts, then at step 808, PCD 100 may enter the "Asynchronous Multimodal Messaging" feature and compile and send the message either to the teller's physical PCD 100 if they have one, or via virtual PCD 100 web link. The listener may have opportunity to incorporate a photo of him/herself listening to the story in the return message.
  • PCD 100 may incorporate a photo/memory maker feature whereby PCD 100 takes over the role of photographer for an event. There are two modes for this:
• In the first mode, the users who wish to be in the picture may stand together and say "PCD, take a picture of us".
  • PCD 100 acknowledges, then uses verbal cues to center the person/s in the camera image, using cues like "back up”, “move left”, etc. When they are properly positioned PCD 100 tells them to hold still, then uses some sort of phrase to elicit a smile ("cheese”, etc.).
  • PCD 100 may use facial expression recognition to tell if they are not smiling and continue to attempt to elicit a smile. When all users in the image are smiling, PCD 100 may take several pictures, using auto-focus and flash if necessary.
• In the second mode, a user may instruct PCD 100 to take pictures of an event for a predetermined amount of time, starting at a particular time (or "now", if desired).
  • PCD 100 uses a combination of sound location and face recognition to look around the room and take candid pictures of the people in the room at a user defined rate. All photos generated may be stored locally in PCD 100 memory.
• PCD 100 may inform a user that photos have been uploaded to the PCD 100 cloud. At that point, they can be accessed via the PCD 100 app or web interface, where a virtual PCD 100 may guide the user through the process of deleting, editing, cropping, etc., photos. They will then be emailed to the user or posted to Facebook, etc. In this "out of the box" version of this app, photos might only be kept on the PCD 100 cloud for a predetermined amount of time, with permanent storage and filing/metatagging offered at a monthly fee as part of, for example, a "living legacy" app described below.
  • PCD 100 may thus operate to aid in enhancing interpersonal and social occasions.
  • an application or “app” may be configured or installed upon PCD 100 to access and operate one or more interface components of PCD 100 to achieve a social activity.
  • PCD 100 may include a factory installed app that, when executed, operates to interact with a user to receive one or more parameters in accordance with which PCD 100 proceeds to take and store one or more photos.
  • a user may say to PCD 100, "Please take at least one picture of every separate individual at this party.”
  • PCD 100 may assemble a list of party guests from an accessible guest list and proceed to take photos of each guest.
  • PCD 100 may remain stationary and query individuals as they pass by for their identity, record the instance, and take a photo of the individual.
  • PCD 100 may interact with guests and ask them to set PCD 100 in front of groupings of guests in order to take their photos. Over a period of time, such as the duration of the party, PCD 100 acquires one more photos of party guests in accordance with the user's wishes in fulfillment of the social goal/activity comprising documenting the social event.
  • PCD 100 may read and react to social cues. For example, PCD 100 may observe a user indicate to another person the need to speak more softly. In response, PCD 100 may lower the volume at which it outputs verbal communications. Similarly, PCD 100 may emit sounds indicative of satisfaction when hugged or stroked. In other embodiments, PCD 100 may emit or otherwise output social cues. For example, PCD 100, sensing that a user is running late for an appointment, may rock back and forth in a seemingly nervous state in order to hasten the rate of the user's departure.
  • PCD 100 may be configured with a calendar system to capture the business of a user and family outside of work.
  • PCDs 100 may be able to share and integrate calendars with those of other PCD 100s if their users give permission, so that an entire extended family with a PCD 100 in every household would be able to have a single unified calendar for everyone.
  • Items in PCD 100s calendar may be metatagged with appropriate information, initially the name of the family member(s) that the appointment is for, how they feel about the appointment/event, date or day-specific info (holidays, etc.) and the like.
  • Types of events that may be entered include, but are not limited to, wake up times, meal times, appointments, reminders, phone calls, household tasks/yardwork, etc. Note that not all events have to be set to a specific time - events may be scheduled predicated on sensor inputs, etc., for instance "remind me the first time you see me tomorrow morning to pack my umbrella".
  • Entry of items into PCD's 100 calendar may be accomplished in a number of ways.
• One embodiment utilizes an Android app or web interface, where virtual PCD 100 guides the user through the process. It is at this point that emoticons or another interface can be used to tell PCD 100 how a user is feeling about the appointment/event.
  • Graphical depiction of a calendar in this mode may be similar to Outlook, allowing a user to see the events/appts of other network members.
  • PCD 100 Calendar may also have a feature for appointment de-confliction similar to what Outlook does in this regard.
  • users may also be able to add items to the calendar through a natural language interface ("PCD, I have a dentist appointment on Tuesday at 1PM, remind me half an hour earlier", or "PCD, dinner is at 5:30PM tonight”).
• User feeling, if not communicated by a user, may be inquired about afterward by PCD 100 (e.g., "How do you feel about that appointment?"), allowing appropriate emotional metatagging.
  • PCD 100 may pass along the reminder in one of two ways. If the user for whom the reminder was set is present in PCD 100's environment, he will pass along the reminder in person, complete with verbal reminder, animation, facial expressions, etc. Emotional content of facial expression may be derived from metatagging of an event such as through emoticon or user verbal inputs. His behaviors can also be derived from known context (for instance, he's always sleepy when waking up or always hungry at mealtimes). Expressions that are contextually appropriate to different events can be refreshed by authoring content periodically to keep it non-repetitive and entertaining.
  • PCD 100 can call out for them. In such an instance, if they are non-responsive to this, PCD 100 may text their phone with the reminder.
  • PCD 100 may be configured with a List Manager feature.
  • PCD 100 may, at the user's request, create to-do lists or shopping lists that can be texted to the user once they have left for the shopping trip.
  • the feature may be initiated by the user via a simple touch interface, or ideally, through a natural language interface.
  • a user may specify the type of list to be made (e.g., "grocery", “clothes", “to-do", or a specific type of store or store name).
  • PCD 100 may ask what is initially on the list, and the user may respond via spoken word to have PCD 100 add things to the list. At any later time, user may ask PCD 100 to add other items to the list.
  • PCD 100 may be able to parse everyday conversation to determine that an item should be added to the list. For example, if someone in the room says “we're out of milk", PCD 100 might automatically add that to the grocery list.
• When the user is leaving for a trip to a store for which PCD 100 has maintained a list, the user may request PCD 100 to text the appropriate list to them, so that it will be available to them when they are shopping in the store. Additionally, if the user is away from PCD 100 but near a store, they may request the list to be sent through the Android or web app. Upon their return (i.e., the next time PCD 100 sees that user after they have requested the list to be texted to them), PCD 100 may ask how the trip went/whether the user found everything on the list. If "yes", PCD 100 will clear the list and wait for other items to be added to it. If "no", PCD 100 will inquire about what was not purchased, and clear all other items from the list.
  • Users might also request to have someone else's PCD-generated list texted to them (pending appropriate permissions). For example, if an adult had given a PCD 100 to an elder parent, that adult could ask PCD 100 to send them the shopping list generated by their parent's PCD 100, so that they could get their parents groceries while they were shopping for their own, or they could ask PCD 100 for Mom's "to-do" list prior to a visit to make sure they had any necessary tools, etc.
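• A non-limiting sketch of inferring list items from ambient conversation, as described above, follows; the phrase patterns and the confirm() helper are assumptions.

    // Sketch of inferring list items from ambient conversation: phrases like
    // "we're out of milk" trigger a confirmation prompt before the item is added.
    const lists = { grocery: [] };

    const OUT_OF_PATTERNS = [
      /we(?:'re| are) out of (.+)/i,
      /we need (?:more |some )?(.+)/i,
    ];

    async function onAmbientSpeech(utterance, confirm) {
      for (const pattern of OUT_OF_PATTERNS) {
        const match = utterance.match(pattern);
        if (match) {
          const item = match[1].trim();
          // Ask before adding, as described above ("would you like coffee added?").
          if (await confirm(`Would you like ${item} added to the grocery list?`)) {
            lists.grocery.push(item);
          }
          return;
        }
      }
    }

    onAmbientSpeech("we're out of coffee", async (q) => { console.log(q); return true; })
      .then(() => console.log(lists.grocery)); // [ 'coffee' ]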
  • PCD 100 may be configured with an "In the Know” feature.
  • PCD 100 may keep a user up to date on the news, weather, sports, etc. in which a user is interested. This feature may be accessed upon request using a simple touch interface, or, ideally, a natural language command (e.g., "PCD 100, tell me the baseball scores from last night").
  • the user may have the ability to set up "information sessions" at certain times of day. This may be done through a web or mobile app interface. Using this feature, PCD 100 may be scheduled to relay certain information at certain times of day. For instance, a user might program their PCD 100 to offer news after the user is awake. If the user says "yes", PCD 100 may deliver the information that the user has requested in his/her "morning briefing". This may include certain team scores/news, the weather, review of headlines from major paper, etc. PCD 100 may start with an overview of these items and at any point the user may ask to know more about a particular item, and PCD 100 will read the whole news item.
  • News items may be "PCD-ized". Specifically, PCD 100 may provide commentary and reaction to the news PCD 100 is reading. Such reaction may be contextually relevant as a result of AI generation.
  • PCD 100 may be configured with a mood, activity, and environment monitor feature in the form of an application for PCD 100.
  • This application may be purchased by a person who had already purchased PCD 100, such as for an elder parent.
  • a web interface or an Android app interface may be used to access the monitoring setup and status.
  • a virtual PCD 100 may guide the user through this process.
  • Some examples of things that can be monitored include (1) Ambient temperature in the room/house where PCD 100 is, (2) Activity (# of times a person walked by per hour/day, # of hours without seeing a person, etc.), (3) a mood of person/s in room: expressed as one of a finite set of choices, based upon feedback from sensors (facial expressions, laughter frequency, frequency of use of certain words/phrases, etc.) and (4) PCD 100 may monitor compliance to a medication regimen, either through asking if medication had been taken, or explicitly watching the medication be taken.
  • the status of the monitors that may have been set can be checked via the app or web interface, or in the case of an alert level being exceeded (e.g., it is too cold in the house, no one has walked by in a threshold amount of time), then a text could be sent by PCD 100 to a monitoring user.
• PCD 100 may also autonomously prompt the user when certain conditions set by the monitoring user via the app or web interface are met, for example, shivering and asking for the heat to be turned up if it is too cold.
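The alert logic implied by the monitoring feature could be sketched as a simple threshold check over periodic readings. The sensor field names, threshold values, and notifyMonitor() hook below are illustrative assumptions, not values from the disclosure.

```javascript
// Illustrative monitor check for the mood/activity/environment feature.
// Sensor fields, thresholds, and notifyMonitor() are hypothetical placeholders.
const thresholds = {
  minTemperatureC: 16,          // "too cold in the house"
  maxHoursWithoutActivity: 8,   // "no one has walked by in a threshold amount of time"
};

function checkMonitors(reading, notifyMonitor) {
  const alerts = [];
  if (reading.ambientTemperatureC < thresholds.minTemperatureC) {
    alerts.push(`Temperature low: ${reading.ambientTemperatureC} C`);
  }
  if (reading.hoursSinceLastPersonSeen > thresholds.maxHoursWithoutActivity) {
    alerts.push(`No activity for ${reading.hoursSinceLastPersonSeen} hours`);
  }
  // Exceeded alert levels are texted to the monitoring user.
  alerts.forEach((message) => notifyMonitor(message));
  return alerts;
}

// Example usage with a simulated reading.
checkMonitors(
  { ambientTemperatureC: 14, hoursSinceLastPersonSeen: 2 },
  (msg) => console.log(`Text to monitoring user: ${msg}`)
);
```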
  • PCD 100 may be configured with a Mood Ring feature.
  • the mood ring feature may make use of PCD's 100 sensors to serve as an indicator and even an influencer of the mood/emotional state of the user.
  • This feature may maintain a real time log of the user's emotional state.
  • This indicator may be based on a fusion of facial expression recognition, body temperature, eye movement, activity level and type, speech prosody, keyword usage, and even such simple techniques as PCD 100 asking a user how they are feeling.
• PCD 100 will attempt to use verification techniques (such as asking) to correct his interpretations and build a better emotional model of the user over time.
  • PCD 100 interprets user body/facial/speech details to determine his emotional state. Over time, PCD 100 is able to accurately interpret user body/facial/speech details to determine the emotional state.
• Once PCD 100 has determined the emotional state of the user, he reports this out to others at step 904. This can be done in a number of ways. To caregivers that are co-located (in a hospital setting, for instance), PCD 100 can use a combination of lighting/face graphics/posture to indicate the mood of the person he belongs to, so that a caregiver could see at a glance that the person under care was sad/happy/angry/etc. and intervene (or not) accordingly.
  • PCD 100 could provide this emotional state data through a mobile/web app that is customizable in terms of which data it presents and for which time periods.
• PCD 100 then tries to effect a change in that mood, at step 906. This could happen autonomously, wherein PCD 100 tries to bring about a positive change in user emotional state through a process of story/joke telling, commiseration, game playing, emotional mirroring, etc.
  • a caregiver upon being alerted by PCD 100 that the primary user is in a negative emotional state, could instruct PCD 100 to say /try/do certain things that they may know will alleviate negative emotions in this particular circumstance.
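The multi-signal mood estimate described above could, under one simple set of assumptions, be a weighted average of normalized per-channel scores. The channel names, weights, and thresholds below are hypothetical; they are not specified in the disclosure.

```javascript
// Hypothetical fusion of mood signals into a single valence estimate in [-1, 1].
// Channel names and weights are illustrative only.
const weights = {
  facialExpression: 0.35,
  speechProsody: 0.25,
  activityLevel: 0.15,
  keywordSentiment: 0.15,
  selfReport: 0.10, // PCD simply asking the user how they feel
};

function estimateValence(signals) {
  let total = 0;
  let weightSum = 0;
  for (const [channel, weight] of Object.entries(weights)) {
    if (typeof signals[channel] === 'number') { // each signal already in [-1, 1]
      total += weight * signals[channel];
      weightSum += weight;
    }
  }
  return weightSum > 0 ? total / weightSum : 0;
}

function describeMood(valence) {
  if (valence > 0.3) return 'happy';
  if (valence < -0.3) return 'sad';
  return 'neutral';
}

// Example: mildly positive face, flat prosody, cheerful self-report.
const v = estimateValence({ facialExpression: 0.4, speechProsody: 0.0, selfReport: 0.8 });
console.log(describeMood(v)); // logs "happy" with these illustrative inputs
```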
  • PCD 100 may be configured with a Night Light feature.
  • PCD 100 may act as an animated nightlight if the user wakes in the middle of the night. If the right conditions are met (e.g., time is in the middle of the night, ambient light is very low, there has been stillness and silence or sleeping noises for a long time, and then suddenly there is movement or speaking), PCD 100 may wake gently, light a pompom in a soothing color, and perhaps inquire if the user is OK. In some embodiments, PCD 100 may suggest an activity or app that might be soothing and help return the user to sleep.
  • PCD 100 may be configured with a Random Acts of Cuteness feature.
• PCD 100 may operate to say things or ask questions throughout the day at various times in a manner designed to be endearing or thought provoking.
  • this functionality does not involve free form natural language conversation with PCD 100, but, rather, PCD's 100 ability to say things that are interesting, cute, funny, etc. as fodder for thought/conversation.
  • PCD 100 may access a database, either internal to PCD 100 or located externally, of sayings, phrases, jokes, etc., that is created, maintained, and refreshed from time to time.
  • Data may come from, for example, weather, sports, news, etc. RSS feeds, crowd sourcing from other PCD 100s, and user profiles.
  • PCD 100 may connect to the cloud, give a user ID, etc., and request a bit from the data repository. As described above, the server will match a fact to the user preferences, day/date/time, weather in the user's home area, etc., to determine the best bit to deliver to that user.
• this feature may take the form of a simple question, where the question is specific enough to make recognition of the answer easier; the answers to such questions may be used to help build the profile of that user, thus ensuring more fitting bits are delivered to his/her PCD 100 at the right times.
  • a user may specifically request an Act of Cuteness through a simple touch interface or through a natural language interface.
  • this feature may employ a "like/dislike" user feedback solicitation so as to enable the algorithm to get better at providing bits of interest to this particular user.
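One plausible way to "match a fact to the user preferences, day/date/time, weather" and so on is a simple tag-overlap score over a repository of bits. The repository contents, tag scheme, and scoring rule below are assumptions for this sketch.

```javascript
// Illustrative server-side selection of a "bit" for Random Acts of Cuteness.
// The repository, tags, and scoring rule are assumptions for this sketch.
const repository = [
  { text: 'Did you know octopuses have three hearts?', tags: ['animals', 'trivia'] },
  { text: 'Perfect weather for a walk today!', tags: ['weather:sunny'] },
  { text: 'Who do you think will win the game tonight?', tags: ['sports', 'question'] },
];

function pickBit(userProfile, context) {
  const wanted = new Set([...userProfile.interests, `weather:${context.weather}`]);
  let best = null;
  let bestScore = -1;
  for (const bit of repository) {
    const score = bit.tags.filter((t) => wanted.has(t)).length;
    if (score > bestScore) {
      best = bit;
      bestScore = score;
    }
  }
  return best;
}

// Example: a sports fan on a sunny day gets a matching bit.
console.log(pickBit({ interests: ['sports'] }, { weather: 'sunny' }).text);
```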
  • PCD 100 may be configured with a DJ feature.
  • PCD 100 may operate to feature music playing, dancing, and suggestions from PCD 100. This feature may operate in several modes. Such modes or functions may be accessed and controlled through a simple touch interface (no more than 2 beats from beginning to desired action), or, in other embodiments, through a natural language interface. Music may be stored locally or received from an external source.
  • PCD 100 may use beat tracking to accompany the song with dance animations, lighting/color shows, facial expressions, etc.
  • PCD's 100 choice of song may depend on which mode is selected such as:
  • PCD 100 may play a specific song, artist, or album that the user selects.
  • PCD 100 may use mood metatags to select a song.
  • the user can give feedback on songs similar to Pandora, allowing PCD 100 to tailor weightings for future selections.
  • PCD 100 uses information from the web (date, day of the week, time of day, calendar events, weather outside, etc.) as well as from sensors 102, 104, 106, 108, 112 (e.g., number/activity level of people in the room, noise levels, etc.) to select songs to play and volumes to play them at, in order to create background ambience in the room.
  • Users may have the ability to control volume or skip a song.
  • users may be able to request a specific song at any time, without leaving ambient music mode. The requested song might be played, and the user choice (as with volume changes) might be used in future selection weightings.
• PCD 100 may also occasionally interject one or more of its own choices into a stream of songs, or try to play a choice upon initiation of Jukebox or Moodbox Mode (in ambient music mode, PCD 100 may NOT do this).
  • PCD's music choices may be based on regularly updated lists from PCD 100, Inc., created by writers or by, for instance, crowd sourcing song selections from other PCDs.
• PCD 100 might also pull a specific song from a specific PCD 100 in the user's network - for instance, PCD 100 may announce 'Your daughter is requesting this song all the time now!' and then play the daughter's favorite song.
  • PCD 100 may ask how it did (and might respond appropriately happy or sad depending on the user's answer), or may give the user a score on how well the user danced. PCD 100 may also capture photos of a user dancing and offer to upload them to a user's PCD profile, a social media site, or email them.
  • modes of functionality include:
  • PCD 100 chooses a song to play, and then uses sound location/face/skeleton tracking to acquire the user in the vis/RGBD camera field of view. As the user dances along to the music, PCD 100 may try to imitate the user's dance. If the user fails to keep time with the music, the music may slow down or speed up. At the end of the song, PCD 100 may ask how it performed in copying the moves of the user, or give the user a score on how well the user kept the beat. PCD 100 may also capture photos of the user dancing and offer to upload them to the user's PCD profile, a social media site, or email them to the user.
  • PCD 100 dances and the user tries to imitate the dance. Again, the playback of music is affected if the user is not doing a good job.
  • a separate screen shows a human dancer for both a user and PCD 100 to imitate. The user and PCD 100 both do their dance-alongs and then PCD 100 grades both itself and the user.
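The Pandora-style feedback weighting mentioned for the DJ feature could be sketched as a multiplicative adjustment of per-song weights followed by a weighted random choice. The mood tags and the 1.3/0.5 adjustment factors below are illustrative assumptions.

```javascript
// Sketch of feedback-weighted song selection for the DJ feature.
// The mood tags and the 1.3 / 0.5 adjustment factors are illustrative assumptions.
const library = [
  { title: 'Song A', moodTags: ['upbeat'], weight: 1 },
  { title: 'Song B', moodTags: ['calm'], weight: 1 },
  { title: 'Song C', moodTags: ['upbeat'], weight: 1 },
];

function applyFeedback(song, liked) {
  song.weight *= liked ? 1.3 : 0.5;
}

function chooseSong(mood) {
  const candidates = library.filter((s) => s.moodTags.includes(mood));
  const total = candidates.reduce((sum, s) => sum + s.weight, 0);
  let r = Math.random() * total;
  for (const song of candidates) {
    r -= song.weight;
    if (r <= 0) return song;
  }
  return candidates[candidates.length - 1];
}

// Example: the user disliked Song A, so Song C becomes more likely for "upbeat".
applyFeedback(library[0], false);
console.log(chooseSong('upbeat').title);
```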
  • PCD 100 may be configured with a Story Acting/Animating feature.
  • PCD 100 may operate to allow a user to purchase plays for an interactive performance with PCD 100.
• Referring to FIG. 10, there is illustrated a flowchart 1000 of an exemplary and non-limiting embodiment. The plays may be purchased outright and stored in the user's PCD Cloud profile, or they may be rented Netflix style, at step 1002.
  • Purchasing of plays/scenes may occur through, for example, an Android app or web interface, where a virtual PCD 100 may guide the user through the purchase and installation process.
  • users may select the play/scene they want to perform. This selection, as well as control of the feature while using it, may be accomplished via a simple touch interface (either PCD's 100 eye or body), or via a natural language interface.
  • PCD 100 may ask whether the user wants to rehearse or perform at step 1006, which will dictate the mode to be entered.
• PCD 100 may begin by asking the user which character they want to be in the play. After this first time, PCD 100 will verify that choice if the play is selected again, and the user can change at any time.

Rehearsal Mode
  • PCD 100 may offer to perform the play in order to familiarize the user with the play, at step 1010. The user may skip this if they are already familiar. If the user does want PCD 100 to perform the play, PCD 100 may highlight the lines for the user's role as the user performs a read through, at step 1012.
  • PCD 100 may begin to teach lines to the user, at step 1014. For each line, PCD 100 may announce the prompt and the line, and then show the words on touch screen 104 while the user recites the line. PCD 100 may use speech recognition to determine if the user is correct, and will keep trying until the user repeats the line correctly. PCD 100 may then offer the prompt to the user and let them repeat the line, again trying until the user can repeat the line appropriately to the prompt. PCD 100 may then move to the next line.
  • PCD 100 will do a run through with all prompts, checking for the proper line in response and prompting the user if necessary.
• prompts may be graphical at first, with the eye morphing into a shape that suggests the line. If the user still cannot remember the line, then PCD 100 can progress to verbal prompting.
  • PCD 100 will do a full up performance of the play, pausing to let the user say their lines and prompting if the user stumbles or forgets. PCD 100 will use full sound effects, background music, animations, and lighting effects during this performance, even during user-delivered lines.
  • PCD 100 may generate a cartoon/animated version of the play, with the user's voice audio during their lines included and synced to the mouth of the character they play (if that is possible).
  • PCD 100 may also be configured to perform plays with multiple participants each playing their own character, and participants may be remote (e.g., on the other end of a teleflow).
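The line-teaching loop in rehearsal mode (announce the prompt and the line, listen, and retry until the user recites it correctly) could look roughly like the following. The speak() and listen() functions are hypothetical stand-ins for the PCD's speech output and speech recognition APIs.

```javascript
// Sketch of the rehearsal-mode line-teaching loop.
// speak() and listen() are hypothetical stand-ins for the PCD's speech APIs.
function normalize(text) {
  return text.toLowerCase().replace(/[^a-z0-9 ]/g, '').trim();
}

async function teachLine(prompt, line, speak, listen, maxAttempts = 3) {
  await speak(`${prompt} ... ${line}`); // announce the prompt and the line
  for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
    const heard = await listen(); // speech recognition result
    if (normalize(heard) === normalize(line)) {
      await speak('Great, you got it!');
      return true;
    }
    await speak(`Not quite. The line is: ${line}`);
  }
  return false; // move on, but flag the line for more practice later
}

// Example with stubbed speech functions.
teachLine(
  'To be, or not to be',
  'That is the question',
  async (text) => console.log(`PCD: ${text}`),
  async () => 'that is the question'
);
```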
  • PCD 100 may be configured to employ an additional feature of the Dancing PCD app described above.
  • a user may create a custom dance for PCD 100. This is created through a mobile or web app, allowing the user to pick the song and select dance moves to put together for PCD 100 to perform with the music. User may also let PCD 100 pick a dance move such that the dance is created collaboratively with PCD 100.
• the custom dance may also include lighting/sound effects (e.g., PCD saying "get down!").
  • PCD 100 dances may be sent to other PCDs 100, shown to friends performed by the virtual PCD 100, saved online, etc. The user may also play other PCD 100 dances created by other PCD 100 users.
• this feature allows the user to download or stream celebrity-generated content to their PCD 100.
  • Content is chosen through a web interface or Android app, where a Virtual PCD 100 may guide the user through the process of content purchase.
  • Content may be either:
  • PCD 100 may stream content that is being generated real time by a celebrity /pundit in a central location.
  • the content creator may also have the ability to real-time "puppet" PCD 100 to achieve animations/lighting/color effects to complement the spoken word.
• no audio watermarking is necessary as the content creator will theoretically be watching the event concurrently with the user and making commentary in real time. This might include political pundits offering commentary on presidential speeches, election coverage, etc., or a user's favorite athlete providing commentary on a sporting event.
  • a persistent companion device (PCD) 100 is adapted to reside continually, or near continually, within the environment of a person or persons.
  • the person is a particular instance of a person for which various parametric data identifying the person is acquired by or made available to the PCD.
  • PCD 100 may further recognize patterns in behavior (schedules, routines, habits, etc.), preferences, attitudes, goals, tasks, etc.
  • the identifying parametric data may be used to identify the presence of the person using, for example, voice recognition, facial recognition and the like utilizing one or more of the sensors 102, 104, 106, 108, 112 described above.
  • the parametric data may be stored locally, such as within a memory of PCD 100, or remotely on a server with which PCD 100 is in wired or wireless communication such as via Bluetooth, Wi-Fi and the like. Such parametric data may be inputted into PCD 100 or server manually or may be acquired by the PCD 100 over time or as part of an initialization process.
  • a user may perform an initialization procedure whereby the PCD 100 is operated/interacted with to acquire an example of the user's voice, facial features or the like (and other relevant factual info).
  • the PCD 100 may be operated/interacted with to acquire an example of the user's voice, facial features or the like (and other relevant factual info).
• in a family hub embodiment, described more fully below, there may be a plurality of users forming a social network of users comprising an extended family. This data may be stored within the PCD 100 and may likewise be communicated by the PCD 100 for external storage such as, for example, at a server.
  • PCD 100 may operate to additionally acquire other parametric data. For example, upon performing initialization comprising providing a sample voice signature, such as by reciting a predetermined text to PCD 100, PCD 100 may autonomously operate to identify the speaking user and acquire facial feature data required for facial identification. As PCD 100 maintains a persistent presence within the environment of the user, PCD 100 may operate over time to acquire various parametric data of the user.
  • PCD 100 operates to obtain relevant information about a person beyond their ID.
  • PCD 100 may operate to acquire background info, demographic info, likes, contact information (email, cell phone, etc.), interests, preferences, personality, and the like.
  • PCD 100 may operate to acquire text based/GUI/speech entered information such as during a "getting acquainted" interaction.
  • PCD 100 may also operate to acquire contact info and personalized parameterized information of the family hub (e.g., elder parent, child, etc.), which may be shared between PCDs 100 as well as entered directly into a PCD 100.
  • PCD 100 operates to facilitate family connection with the extended family.
  • daily information including, but not limited to, a person's schedule, events, mood, and the like may provide important context for how PCD 100 interacts, recommends, offers activities, offers information, and the like to the user.
  • contextual, longitudinal data acquired by PCD 100 facilitates an adaptive system that configures its functions and features to become increasingly tailored to the interests, preferences, and use cases of the user(s). For instance, if the PCD 100 learns that a user likes music, it can automatically download the "music attribute" from the cloud to be able to discover music likes, play music of that kind, and make informed music recommendations.
  • PCD 100 learns about a user's life.
  • PCD 100 can sense the user in the real world and it can gather data from the ecology of other devices, technologies, systems, personal computing devices, personal electronic devices that are connected to the PCD 100. From this collection of longitudinal data, the PCD 100 learns about the person and the patterns of activities that enable it to learn about the user and to configure itself to be better adapted and matched to the functions it can provide.
• PCD 100 learns about your social/family patterns and who the important people are in your life (your extended family); it learns about and tracks your emotions/moods; it learns about important behavioral patterns (when you tend to do certain things); it learns your preferences, likes, etc.; and it learns what you want to know about, what entertains you, etc.
  • PCD 100 is configured to interact with a user to provide a longitudinal data collection facility for collecting data about the interactions of the user of PCD 100 with PCD 100.
  • PCD 100 is configured to acquire longitudinal data comprising one or more attributes of persistent interaction with a user via interaction involving visual, auditory and tactile sensors 102, 104, 106, 108, 112.
  • visual, auditory and tactile sensations may be perceived or otherwise acquired by PCD 100 from the user as well as conveyed by PCD 100 to the user.
  • PCD 100 may incorporate camera sensor 106 to acquire visual information from a user including data related to the activities, emotional state and medical condition of the user.
  • PCD 100 may incorporate audio sensor 112 to acquire audio information from a user including data derived from speech recognition, data related to stress levels as well as contextual information such as the identity of entertainment media utilized by the user.
• PCD 100 may further incorporate tactile sensor 102 to acquire tactile information from a user including data related to a user's touching or engaging in physical contact with PCD 100 including, but not limited to, petting and hugging PCD 100.
  • a user may also use touch to navigate a touch screen interface of PCD 100.
• a location of PCD 100 or a user may be determined, such as via a cell phone the user is carrying, and used as input to give location/context-relevant information and provide services.
  • visual, auditory and tactile sensations may be conveyed by PCD 100 to the user.
  • audio output device may be used to output sounds, alarms, music, voice instructions and the like and to engage in conversation with a user.
  • graphical element may be utilized to convey text and images to a user as well as operate to convey graphical data comprising a portion of a communication interaction between PCD 100 and the user. It can use ambient light and other cues (its LED pom-pom).
  • Tactile device 102 may be used to convey PCD 100 emotional states and various other data including, via, for example, vibrating, and to navigate the interface/content of the device. The device may emit different scents that suit the situation, mood, etc. of the user.
  • Information may be gathered through different devices that are connected to the PCD 100. This could come from third party systems data (e.g., medical, home security, etc.), mobile device data (music playlists, photos, search history, calendar, contact lists, videos, etc.), desktop computer data (esp. entered through the PCD 100 portal), or the like.
  • third party systems data e.g., medical, home security, etc.
  • mobile device data music playlists, photos, search history, calendar, contact lists, videos, etc.
  • desktop computer data esp. entered through the PCD 100 portal
  • interaction data may be stored on and transmitted between PCD 100 and a user via cloud data or other modes of connectivity (Bluetooth, etc.).
  • access may be enabled by PCD 100 to a user's cloud stored data to enable interaction with PCD 100.
• PCD 100 may search the internet, use an app/service, or access data from the cloud - such as a user's schedule from cloud storage - and use information derived therefrom to trigger interactions.
  • PCD 100 may note that a user has a breakfast appointment with a friend at 9:00 am at a nearby restaurant.
  • PCD 100 may interact with the user by speaking via audio device 110 to query if the user shouldn't be getting ready to leave.
  • PCD 100 may accomplish this feat by autonomously performing a time of travel computation based on present GPS coordinates and those of the restaurant.
  • PCD 100 may apply one or more algorithms to accessed online or cloud data to trigger actions that result in rapport building interactions between PCD 100 and the user.
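The "time of travel computation based on present GPS coordinates" in the breakfast-appointment example above can be approximated with a great-circle distance and an assumed average speed. Both the assumed speed and the decision buffer below are illustrative; the disclosure does not specify them.

```javascript
// Illustrative travel-time reminder based on GPS coordinates.
// The assumed average speed and the 10-minute buffer are not from the disclosure.
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const R = 6371; // Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

function shouldRemindToLeave(here, destination, appointmentTime, now) {
  const distanceKm = haversineKm(here.lat, here.lon, destination.lat, destination.lon);
  const travelMinutes = (distanceKm / 40) * 60; // assume ~40 km/h average speed
  const minutesUntilAppointment = (appointmentTime - now) / 60000;
  return minutesUntilAppointment <= travelMinutes + 10; // 10-minute buffer
}

// Example: a 9:00 am breakfast a few kilometers away, checked at 8:50 am.
const now = new Date('2017-03-30T08:50:00');
const appt = new Date('2017-03-30T09:00:00');
console.log(shouldRemindToLeave(
  { lat: 42.361, lon: -71.087 },
  { lat: 42.352, lon: -71.055 },
  appt,
  now
)); // logs true: time to suggest getting ready
```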
  • People can communicate with PCD 100 via social networking, real-time or asynchronous methods, such as sending texts, establishing a real-time audio-visual connection, connecting through other apps/services (Facebook, twitter, etc.), and the like.
  • interaction data may be stored in proximity to or in a user's environment such as on a server or personal computer or mobile device, and may be accessible by the user.
  • PCD 100 may likewise store data in the cloud.
  • interaction data may be acquired via sensors external to PCD 100.
  • Activities log may store information recording activities engaged in by the user, by PCD 100 or by both the user and PCD 100 in an interactive manner.
  • activities log may record instances of PCD 100 and the user engaging in the game of chess.
  • There may additionally be stored information regarding the user's emotional state during such matches from which may be inferred the user's level of enjoyment.
  • PCD 100 may determine such things as how often the user desires to play chess, how long has it been since PCD 100 and the user last played chess, the likelihood of the user desiring to engage in a chess match and the like.
  • a device usage log may be stored and maintained that indicates when, how often and how the user prefers to interact with PCD 100.
  • both the activities log and the device usage log may be used to increase both the frequency and quality of interactions between PCD 100 and the user.
  • interaction data may be acquired via manual entry. Such data may be entered by the user directly into PCD 100 via input devices 102, 104, 106, 108, 112 forming a part of PCD 100 or into a computing device, such as a server, PDA, personal computer and the like, and transmitted or otherwise communicated to PCD 100, such as via Bluetooth or Wi-Fi/cloud. In other embodiments, interaction data may be acquired by PCD 100 via a dialog between PCD 100 and the user.
  • PCD 100 may engage in a dialog with the user comprising a series of questions with the user's answers converted to text via speech recognition software operating on PCD 100, on a server or in the cloud, with the results stored as interaction data.
• a GUI or touch-based interaction may likewise be used to engage in a dialog with the user comprising a series of questions, with the results stored as interaction data.
  • interaction data may be generated via a sensor 102, 104, 106, 108, 112 configured to identify olfactory data.
  • PCD 100 may be configured to emit olfactory scents.
• GPS and other location determining apparatus may be incorporated into PCD 100 to enhance interaction. For example, a child user may take his PCD 100 on a family road trip or vacation. While in transit, PCD 100 may determine its geographic location, access the internet to determine nearby landmarks and engage in a dialogue with the child that is relevant to the time and place by discussing the landmarks.
  • the results of such interactions may be transmitted at the time or at a later time to a remote storage facility whereat there is accumulated interaction data so acquired from a plurality of users in accordance with predefined security settings.
  • a centralized database of preferable modes of interaction may be developed based on a statistical profile of a user's attributes and PCD 100 acquired data, such as location. For instance, in the previous example, PCD 100 may determine its location as being on the National Mall near the Air and Space Museum and opposite the Museum of Natural History. By accessing a centralized database and providing the user's age and location, it may be determined that other children matching the user's age profile tend to be interested in dinosaurs. As a result, PCD 100 commences to engage in a discussion of dinosaurs while directing the user to the Museum of Natural History.
  • PCD 100 may modulate aspects of interaction with a user based, at least in part, upon various physiological and physical attributes and parameters of the user.
  • PCD 100 may employ gaze tracking to determine the direction of a user's gaze. Such information may be used, for example, to determine a user's interest or to gauge evasiveness.
  • a user's heart rate and breathing rate may be acquired.
  • a user's skin tone may be determined from visual sensor data and utilized to ascertain a physical or emotional state of the user.
  • PCD 100 may ascertain and interpret physical gestures of a user, such as waving or pointing, which may be subsequently utilized as triggers for interaction. Likewise, a user's posture may be assessed and analyzed by PCD 100 to determine if the user is standing, slouching, reclining and the like.
  • interaction between PCD 100 and a user may be based, at least in part, upon a determined emotional or mental state or attribute of the user.
  • PCD 100 may determine and record the rate at which a user is blinking, whether the user is smiling or biting his/her lip, the presence of user emitted laughter and the like to ascertain whether the user is likely to be, for example, nervous, happy, concerned, amused, etc.
  • PCD 100 may observe a user's gaze being fixated on a point in space while the user remains relatively motionless and silent in an otherwise silent environment and determine that the user is in a state of thought or confused.
  • PCD 100 may interpret user gestures such as nodding or shaking one's head as indications of mental agreement or disagreement.
  • the general attributes of the interface via which a user interacts may be configured and/or coordinated to provide an anthropomorphic or non-human based PCD 100.
  • PCD 100 is configured to display the characteristics of a non-human animal.
  • interaction between PCD 100 and a user may be enhanced by mimicking and/or amplifying an existing emotional predilection by a user for a particular animal.
  • PCD 100 may imitate a dog by barking when operating to convey an excited state.
  • PCD 100 may further be fitted with a tail like appendage that may wag in response to user interactions.
  • PCD 100 may output sounds similar to the familiar feline "meow" .
• such interface attributes may vary over time to further enhance interaction, such as by simulating an aging process for the PCD 100 animal character alongside the user.
  • a PCD 100 character based on a dog may mimic the actions of a puppy when first acquired and gradually mature in its behaviors and interactions to provide a sense on the part of the user that the relationship of the user and the PCD character is evolving.
  • PCD 100 may be configured to provide an anthropomorphic interface modeled on a human being.
  • a human being or "persona” may be pre -configured, user definable or some combination of the two. This may include impersonations where PCD 100 may take on the mannerisms and characteristics of a celebrity, media personality or character (e.g., Larry Bird, Jon Stewart, a character from Downton Abby, etc.).
  • the persona, or "digital soul" of PCD 100 may be stored (e.g. in the cloud), in addition to being resident on PCD 100, external to PCD 100 and may therefore be downloaded and installed on other PCDs 100.
• These other PCDs can be graphical (e.g., its likeness appears on the user's mobile device) or another physical PCD 100 (e.g., a new model).
  • PCD 100 can also be of a synthetic or technological nature.
  • PCD 100 functions as personified technology wherein device PCD 100 is seen to have its own unique persona, rather than trying to emulate something else that already exists such as a person, animal, known character and the like.
  • proprietary personas may be created for PCD 100 that can be adapted and modified over time to better suit its user.
  • the prosody of a user's PCD 100 may adapt over time to mirror more closely that of its user's own prosody as such techniques build affinity and affection.
  • PCD 100 may also change its graphical appearance to adapt to the likes and preferences of its user in addition to any cosmetic or virtual artifacts its user buys to personalize or customize PCD 100.
• the digital soul of PCD 100 defines characteristics and attributes of the interface of PCD 100 as well as attributes that affect the nature of interactions between the user and PCD 100. While this digital soul is bifurcated from the interaction data and information utilized by PCD 100 to engage in interaction with a user, the digital soul may change over time in response to interaction with particular users. For example, two separate users, each with their own PCD 100, may install an identical digital soul based, for example, on a well-known historical figure, such as Albert Einstein. From the moment of installation on the two separate PCDs 100, each PCD 100 will interact in a different manner depending on the user-specific interaction data generated by and accessible to PCD 100.
  • the Digital Soul can be embodied in a number of forms, from different physical forms (e.g., robotic forms) or digital forms (e.g., graphical avatars).
  • PCD 100 provides a machine learning facility for improving the quality of the interactions based on collected data.
• the algorithms utilized to perform the machine learning may execute on PCD 100 or on a computing platform in communication with PCD 100.
  • PCD 100 may employ association conditioning in order to interact with a user to provide coaching and training. Association, or "operant" conditioning focuses on using reinforcement to increase a behavior. Through this process, an association is formed between the behavior and the consequences for that behavior. For example, PCD 100 may emit a happy noise when a user wakes up quickly and hops out of bed as opposed to remaining stationary.
• Over time, this interaction between PCD 100 and the user operates to motivate the user to rise more quickly as the user associates PCD's 100 apparent state of happiness with such an action.
  • PCD 100 may emit encouraging sounds or words when it is observed that the user is exercising.
  • PCD 100 serves to provide persistent positive reinforcement for actions desired by the user.
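Under the association-conditioning description above, a very simple sketch is to map observed target behaviors to immediate positive consequences. The behavior names and the react() hook below are hypothetical placeholders.

```javascript
// Minimal sketch of operant-conditioning-style reinforcement.
// Behavior names and the react() hook are hypothetical placeholders.
const reinforcements = {
  'woke-up-quickly': { reaction: 'happy-chirp', message: 'Good morning! Love the energy!' },
  'exercised': { reaction: 'cheer', message: 'Nice workout!' },
};

function onBehaviorObserved(behavior, react) {
  const reinforcement = reinforcements[behavior];
  if (reinforcement) {
    // Deliver the positive consequence immediately so the user
    // associates the behavior with PCD's apparent happiness.
    react(reinforcement.reaction, reinforcement.message);
  }
}

// Example with a stubbed expressive output.
onBehaviorObserved('exercised', (animation, message) =>
  console.log(`play ${animation}; say "${message}"`)
);
```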
  • PCD 100 may employ one of a plurality of types of analysis known in the art when performing machine learning including, but not limited to temporal pattern modeling and recognition, user preference modeling, feature classification, task/policy modeling and reinforcement learning.
  • PCD 100 may employ a visual, audio, kinesthetic, or "VAK", model for identifying a mode of interaction best suited to interacting with a user.
  • PCD 100 may operate to determine the dominant learning style of a user.
  • PCD 100 may employ charts or illustrations, such as on a graphic display 104 forming a part of PCD 100 to convey information to the user.
  • PCD 100 may operate to issue questions and other prompts to a user to help them stay alert in auditory environments.
  • PCD 100 may commence new interactions with a brief explanation of what is coming and may conclude with a summary of what has transpired.
  • PCD 100 may operate to interact with the user via kinesthetic and tactile interactions involving movement and touch. For example, to get a user up and active in the morning, PCD 100 may engage in an activity wherein PCD 100 requests a hug from the user. In other embodiments, to highlight and reinforce an element of a social interaction, PCD 100 may emit a scent related to the interaction.
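One simple way to act on a dominant VAK learning style, under the assumptions of this sketch, is to tally which modality the user engages with most and route new content to the matching output. The engagement counters and routing rules below are illustrative, not part of a published API.

```javascript
// Illustrative VAK (visual/auditory/kinesthetic) modality selection.
// The engagement counters and routing rules are assumptions for this sketch.
const engagement = { visual: 0, auditory: 0, kinesthetic: 0 };

function recordEngagement(modality) {
  engagement[modality] += 1;
}

function dominantStyle() {
  return Object.entries(engagement).sort((a, b) => b[1] - a[1])[0][0];
}

function present(content, outputs) {
  switch (dominantStyle()) {
    case 'visual':
      return outputs.showChart(content);           // charts/illustrations on the display
    case 'auditory':
      return outputs.speakWithQuestions(content);  // spoken summary with prompts
    default:
      return outputs.inviteInteraction(content);   // movement/touch-based activity
  }
}

// Example: a user who mostly responds to spoken prompts.
recordEngagement('auditory');
recordEngagement('auditory');
recordEngagement('visual');
present('morning summary', {
  showChart: (c) => console.log(`display: ${c}`),
  speakWithQuestions: (c) => console.log(`speak: ${c}`),
  inviteInteraction: (c) => console.log(`activity: ${c}`),
});
```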
  • PCD 100 operates to give a remote person a physically embodied and physically socially expressive way to communicate that allows people to "stay in the flow of their life” rather than having to stop and huddle in front of a screen (modern video conferencing).
  • PCD 100 provides support for casual interactions, as though a user were visiting someone in their house. A user may be doing other activities, such as washing dishes, etc. and still be carrying on a conversation because of how the PCD 100 can track the user around the room.
• PCD 100 is designed to have its sensors and outputs carry across a room, etc. Core technical aspects include:
  • a user may control the PCD 100's camera view, and it can also help to automate this by tracking and doing the inverse kinematics to keep its camera on the target object.
• PCD 100 may render a representation of you (video stream, graphics, etc.) to the screen in a way that preserves important non-verbal cues like eye contact.
  • PCD 100 may mirror the remote person's head pose, body posture so that person has an expressive physical presence. PCD 100 may also generate its own expressive body movements to suit the situation, such as postural mirroring and synchrony to build rapport.
• PCD 100 may further trigger fun animations and sounds, so a user may either try to convey themselves accurately or appear as a fun character. This is really useful for connected story reading, where a grandma can read a story remotely with her grandchild, while taking on different characters during the story session.
• PCD 100 may track who is speaking to automatically shift its gaze/your camera view to the speaker (to reduce the cognitive load in having to manually control the PCD 100).
  • PCD 100 may have a sliding autonomy interface so that the remote user can assert more or less direct control over the PCD 100, and it can use autonomy to supplement.
  • PCD 100 may provide a user with a wide field of view (much better than the tunnel vision other devices provide/assume because you have to stay in front of it)
  • PCD 100 may be configured or adapted to be positioned in a stable or balanced manner on or about a variety of surfaces typical of the environment in which a user lives and operates.
  • generally planar surfaces of PCD 100 may be fabricated from or incorporate, at least in part, friction pads which operate to prevent sliding of PCD 100 on smooth surfaces.
  • PCD 100 may employ partially detachable or telescoping appendages that may be either manually or automatically deployed to position PCD 100 on uneven surfaces.
  • the device may have hardware accessories that enable it to locomote in the environment or manipulate objects. It may be equipped with a laser pointer or projector to be able to display on external surfaces or objects.
  • PCD 100 may incorporate friction pads on or near the extremities of the appendages to further reduce slipping.
  • PCD 100 may incorporate one or more suction cups on an exterior surface or surfaces of PCD 100 for temporary attachment to a surface.
  • PCD 100 may incorporate hooks, loops and the like for securing PCD 100 in place and/or hanging PCD 100.
• PCD 100 is adapted to be portable by hand. Specifically, PCD 100 is configured to weigh less than 10 kg and occupy a volume of no more than 4,000 cm³. Further, PCD 100 may include an attached or detachable strap or handle for use in carrying PCD 100.
  • PCD 100 is configured to be persistently aware of, or capable of determining via computation, the presence or occurrence of social cues and to be socially present. As such, PCD 100 may operate so as to avoid periods of complete shutdown. In some embodiments, PCD 100 may periodically enter into a low power state, or "sleep state", to conserve power. During such a sleep state, PCD 100 may operate to process a reduced set of inputs likely to alert PCD 100 to the presence of social cues, such as a person or user entering the vicinity of PCD 100, the sound of a human voice and the like. When PCD 100 detects the presence of a person or user with whom PCD 100 is capable of interacting, PCD 100 may transition to a fully alert mode wherein more or all of PCDs 100 sensor inputs are utilized for receiving and processing contextual data.
  • PCD 100 may augment being in a sleep state by emitting white noise or sounds mimicking snoring.
  • PCD 100 senses the presence of the user and proceeds to transition to a fully alert or powered up mode by, for example, greeting the user with a noise indicative of waking up, such as a yawn.
• Such actions serve as cues to begin interactions between PCD 100 and a user.
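The sleep-to-alert transition described above can be modeled as a simple two-state machine gated on a reduced set of wake-worthy inputs. The event names and the express() hook below are hypothetical placeholders.

```javascript
// Sketch of the sleep/alert transition. Event names are hypothetical.
let state = 'sleep'; // low-power state: only a reduced set of inputs is processed

function handleEvent(event, express) {
  if (state === 'sleep') {
    // Only wake-worthy social cues are considered while asleep.
    if (event === 'person-detected' || event === 'voice-heard') {
      state = 'alert';
      express('yawn'); // greet the user with a waking-up cue
    }
  } else if (state === 'alert' && event === 'long-silence') {
    state = 'sleep';
    express('snore'); // optional sleep-state sound described above
  }
}

// Example sequence.
const express = (cue) => console.log(`expressive cue: ${cue}`);
handleEvent('voice-heard', express);   // wakes up
handleEvent('long-silence', express);  // returns to sleep
```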
  • PCD 100 is adapted to monitor, track and characterize verbal and nonverbal signals and cues from a user.
  • cues include, but are not limited to, gesture, gaze direction, word choice, vocal prosody, body posture, facial expression, emotional cues, touch and the like. All such cues may be captured by PCD 100 via sensor devices 102, 104, 106, 108, 112.
  • PCD 100 may further be configured to adapt and adjust its behavior to effectively mimic or mirror the captured cues. By so doing, PCD 100 increases rapport between PCD 100 and a user by seeming to reflect the characteristics and mental states of the user. Such mirroring may be incorporated into the personality or digital soul of PCD 100 for long-term projection of said characteristics by PCD 100 or may be temporary and extend, for example, over a period of time encompassing a particular social interaction.
  • PCD 100 may add the phrase to the corpus of interaction data for persistent use by PCD 100 when interacting with the user in the future.
• PCD 100 may mimic transient verbal and non-verbal gestures in real or near real time. For example, if PCD 100 detects a raised frequency of a user's voice coupled with an increased word rate indicative of excitement, PCD 100 may commence to interact verbally with the user in a higher than normal frequency with an increased word rate.
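Real-time prosody mirroring, sketched under one set of assumptions, maps the detected pitch and speaking rate onto TTS parameters while keeping them within safe bounds. The baseline values, the 0.5 mirroring strength, and the clamping ranges below are illustrative.

```javascript
// Illustrative prosody mirroring: nudge TTS parameters toward the user's.
// Baselines, the 0.5 mirroring strength, and the clamping bounds are assumptions.
const baseline = { pitchHz: 180, wordsPerMinute: 140 };

function clamp(value, min, max) {
  return Math.min(max, Math.max(min, value));
}

function mirrorProsody(detected, strength = 0.5) {
  return {
    pitchHz: clamp(
      baseline.pitchHz + strength * (detected.pitchHz - baseline.pitchHz),
      120, 260
    ),
    wordsPerMinute: clamp(
      baseline.wordsPerMinute + strength * (detected.wordsPerMinute - baseline.wordsPerMinute),
      100, 200
    ),
  };
}

// Example: an excited user (higher pitch, faster speech) pulls PCD's voice upward.
console.log(mirrorProsody({ pitchHz: 240, wordsPerMinute: 190 }));
// { pitchHz: 210, wordsPerMinute: 165 }
```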
  • PCD 100 may project a distinct persona or digital soul via various physical manifestations forming a part of PCD 100 including, but not limited to, body form factor, physical movements, graphics and sound.
  • PCD 100 may employ expressive mechanics.
  • PCD 100 may incorporate a movable jaw appendage that may be activated when speaking via the output of an audio signal. Such an appendage may be granted a number of degrees of freedom sufficient to mimic a smile or a frown as appropriate.
  • PCD 100 may be configured with one or more "eye like” accessories capable of changing a degree of visual exposure. As a result, PCD 100 can display a "wide eyed" expression in response to being startled, surprised, interested and the like.
  • PCD 100 may detect its posture or position in space to transition between, for example, a screen mode and an overall mode. For example, if PCD 100 incorporates a screen 104 for displaying graphical information, PCD 100 may transition from whatever state it is in to a mode that outputs information to the screen when a user holds the screen up to the user's face and into a position from which the user can view the display.
  • one or more pressure sensors forming a part of PCD 100 may detect when a user is touching PCD 100 in a social manner. For example, PCD 100 may determine from the pattern in which more than one pressure sensors are experiencing pressure that a user is stroking, petting or patting PCD 100. Different detected modes of social touch may serve as triggers to PCD 100 to exhibit interactive behaviors that encourage or inhibit social interaction with the user.
  • PCD 100 may be fitted with accessories to enhance the look and feel of PCD 100.
  • accessories include, but are not limited to, skins, costumes, both internal and external lights, masks and the like.
  • the persona or digital soul of PCD 100 may be bifurcated from the physical manifestation of PCD 100.
  • the attributes comprising a PCD 100 persona may be stored as digital data which may be transferred and communicated, such as via Bluetooth or Wi-Fi to one or more other computing devices including, but not limited to, a server and a personal computing device.
  • a personal computing device can be any device utilizing a processor and stored memory to execute a series of programmable steps.
  • the digital soul of PCD 100 may be transferred to a consumer accessory such as a watch or a mobile phone. In such an instance, the persona of PCD 100 may be effectively and temporarily transferred to another device.
  • the transferred instance of PCD 100 may continue to sense the environment of the user, engage in social interaction, and retrieve and output interaction data. Such interaction data may be transferred to PCD 100 at a later time or uploaded to a server for later retrieval by PCD 100.
  • PCD 100 may exhibit visual patterns, which adjust in response to social cues. For example, display 104 may emit red light when excited and blue light when calm. Likewise, display 104 may display animated confetti falling in order to convey jubilation such as when a user completes a task successfully.
  • the textures and animations for display may be user selectable or programmable either directly into PCD 100 or into a server or external device in communication with PCD 100.
  • PCD 100 may emit a series of beeps and whistles to express simulated emotions.
  • the beeps and whistles may be patterned upon patterns derived from the speech and other verbal utterances of the user.
  • the beeps, whistles and other auditory outputs may serve as an auditory signature unique to PCD 100.
  • variants of the same auditory signature may be employed on a plurality of PCDs 100, such as a group of "related" PCDs 100 forming a simulated family, to indicate a degree of relatedness.
  • PCD 100 may engage in anamorphic transitioning between modes of expression to convey an emotion.
  • PCD 100 may operate a display 104 to transition from a random or pseudorandom pattern or other graphic into a display of a smiling or frowning mouth as a method for displaying human emotion.
  • PCD 100 may emit scents or pheromones to express emotional states.
• PCD 100 may be provided with a back story in the form of data accessible to PCD 100 that may form the basis of interactions with users.
  • data may comprise one or more stories making reference to past events, both real and fictional, that form a part of PCDs 100 prior history.
  • PCD 100 may be provided with stories that may be conveyed to a user via speech generation that tell of past occurrences in the life of PCD 100.
• Such stories may be outputted upon request by a user or may be triggered by interaction data.
  • PCD 100 may discern from user data that today is the user's birthday.
  • PCD 100 may be triggered to share a story with the user related to a past birthday of PCD 100.
  • Data comprising the back story may be centrally stored and downloaded to PCD 100 upon request by a user or autonomously by PCD 100.
  • Back stories may be generated and stored by a manufacturer of PCD 100 and made available to a user upon request.
  • a manufacturer may receive as input a request for a back-story for a PCD 100 modeled on a dog associated with a user interested in sports, particularly, baseball and the Boston Red Sox.
  • the manufacturer or third party back-story provider may generate a base back story, at step 1104.
  • the story may comprise relatively generic dog stories augmented by more particular stories dealing with baseball to which are added details related to the Red Sox.
  • the back-story may be encoded with variables that will allow for further real time customization by PCD 100.
  • PCD 100 may be provided with an executable module or program for managing a co-nurturance feature of PCD 100 whereby the user is encouraged to care for the companion device.
• a co-nurturance module may operate to play upon a user's innate impulse to care for a baby by commencing interaction with a user via behavior involving sounds, graphics, scents and the like associated with infants.
  • Rapport between PCD 100 and a user may be further encouraged when a co-nurturance module operates to express a negative emotion such as sadness, loneliness and/or depression while soliciting actions from a user to alleviate the negative emotion. In this way, the user is encouraged to interact with PCD 100 to cheer up PCD 100.
  • PCD 100 may include a module configured to access interaction data indicative of user attributes, interactions of the user of PCD 100 with PCD 100, and the environment of the user of PCD 100.
• Referring to FIG. 12, there is illustrated a flowchart 1200 of an exemplary and non-limiting embodiment.
  • the interaction data is accessed.
• the interaction data may be stored in a centralized data collection facility. Once retrieved and stored, at step 1206, the interaction data may be utilized to anticipate a need state of the user. Once a need state is identified, it can be utilized to proactively address a user's needs without reliance on a schedule for performing an action, at step 1208.
  • a user's physical appearance, posture and the like may form the basis for identifying a need state.
  • the identification of a need state may be supplemented by schedule data, such as comprising a portion of interaction data.
  • a schedule may indicate that it is past time to fulfill a user's need to take a dose of antibiotics.
  • PCD 100 may ascertain a user's need state, in part, from data derived from facial analysis and voice modulation analysis.
  • PCD 100 may be used as a messenger to relay a message from one person to another.
  • Messages include, but are not limited to audio recordings of a sender's voice, PCD 100 relaying a message in character, dances/animations/sound clips used to enhance the message and songs.
  • PCD 100 is embodied as an app on a smart device.
• the sender may open the app and select a message and associated sounds, scheduling, etc.
  • a virtual instance of PCD 100 in the app may walk the user through the process.
  • a sender/user may instruct PCD 100, via a simple touch interface or a natural language interface, to tell another person something at some future time. For example a user might say "PCD, when my wife comes into the kitchen this morning, play her X song and tell her that I love her".
  • Sender might also have PCD 100 record his/her voice to use as part of the message.
  • the message may be delivered by a different PCD 100 at another location.
• a user/sender can, for instance, tweet a message to a specific PCD's 100 hashtag, and PCD 100 will speak that message to the user/recipient.
  • Emoticons may also be inserted into the message, prompting a canned animation/sound script to be acted out by PCD 100.
  • PCD 100 may be used to generate messages for users who don't have PCDs. Such messages may be generated in the form of a weblink, and may incorporate a Virtual PCD 100 for delivering the message just as a physical PCD 100 would if the receiver had one.
• PCD 100 may be configured to receive messages from persons, such as friends and family of the user, wherein the messages trigger actions related to emotions specified in the messages. For example, a person may text a message to a PCD 100 associated with a user within which is embedded an emoticon representing an emotion or social action that the sender of the message wishes to convey via PCD 100. For example, if a sender sends a message to PCD 100 reading "Missing you a lot OX", PCD 100 may, upon receiving the message, output, via a speech synthesizer, "Incoming message from Robert reads 'Missing you a lot'" while simultaneously emitting a kissing sound, displaying puckered lips on a display or similar action. In this way, message senders may annotate their messages to take advantage of the expressive modalities by which PCD 100 may interact with a user.
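The emoticon-to-expression mapping described above could be handled by scanning the incoming text for known tokens and queuing the corresponding animation and sound script. The token table and output hooks below are hypothetical placeholders.

```javascript
// Sketch of emoticon-annotated message delivery. The token table is hypothetical.
const emoticonScripts = {
  'OX': { sound: 'kiss.wav', graphic: 'puckered-lips' },
  ':)': { sound: 'giggle.wav', graphic: 'smile' },
};

function deliverMessage(sender, text, outputs) {
  let spoken = text;
  const scripts = [];
  for (const [token, script] of Object.entries(emoticonScripts)) {
    if (spoken.includes(token)) {
      scripts.push(script);
      spoken = spoken.replace(token, '').trim();
    }
  }
  outputs.speak(`Incoming message from ${sender} reads: ${spoken}`);
  scripts.forEach((s) => {
    outputs.playSound(s.sound);
    outputs.showGraphic(s.graphic);
  });
}

// Example matching the scenario above, with stubbed outputs.
deliverMessage('Robert', 'Missing you a lot OX', {
  speak: (t) => console.log(`say: ${t}`),
  playSound: (f) => console.log(`play: ${f}`),
  showGraphic: (g) => console.log(`display: ${g}`),
});
```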
  • the method comprises providing a persistent companion device (PCD) at step 1302.
  • the method further comprises inputting at least one of a verbal and nonverbal signals from a user selected from the group consisting of gesture, gaze direction, word choice, vocal prosody, body posture, facial expression, emotional cues and touch, at step 1304.
  • the method further comprises adjusting a behavior of the PCD to mirror the at least one of a verbal and nonverbal signals, at step 1306.
  • PCD 100 may utilize a user interface to display a recurring, persistent, or semi-persistent, visual element, such as an eye, during an interaction with a user.
  • the visual element 1400 comprising a lighter circle indicative of an iris or reflection on the surface of the eye, may shift its position to the bottom of the question mark as the eye morphs or otherwise smoothly transitions into a question mark visual element 1400"' via intermediary visual elements 1400', 1400".
  • the ability of the visual element to morph as described and illustrated results in high-readability.
  • a visual element 1500 in instances where the eye is intended to morph into a shape that is too visually complex for the eye, may "blink" as illustrated to transition into the more visually complex shape 1500'.
  • the visual element of the eye 1500 "blinks" to reveal a temperature or other weather related variable shape 1500'.
  • a mouth symbol may be formed or burrowed out of the surface area of the eye visual element.
  • the color of the visual element may be altered to reinforce the displayed expression.
  • the PCD 100 may have and exhibit "skills," as compared to applications that run on conventional mobile devices like smartphones and tablets. Just like applications that run on mobile platforms like iOS and Android, the PCD 100 may support the ability to deploy a wide variety of new skills.
  • a PCD skill may comprise a JavaScript package, along with assets and configuration files that may invoke various JavaScript APIs, as well as feed information to an execution engine. As a result, both internal and external developers may be supported in developing new skills for the PCD 100.
  • any new social robot skill is capable of being written entirely in Javascript that relates to a set of JavaScript APIs that comprise the core components of a software development kit (SDK) for developing new skills.
  • a set of tools such as an expression tool suite and a behavior editor, may allow developers to create configuration files that feed into the execution engine, facilitating simpler and more rapid skill development as well as the use of previously developed skills.
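Since a skill is described as a JavaScript package plus assets and configuration files that feed the execution engine, a skill's entry point might look roughly like the sketch below. The API names shown (onSpeech, express, the config fields) are illustrative assumptions, not a published SDK surface.

```javascript
// Hypothetical shape of a PCD skill entry point; the API names shown
// (onSpeech, express) and the config fields are illustrative, not a published SDK.
const config = {
  name: 'greeting-skill',
  grammar: ['hello', 'good morning'],            // input grammar fed to the execution engine
  assets: { animation: 'wave.anim', sound: 'chime.wav' },
};

function createSkill(pcd) {
  return {
    start() {
      // Register for recognized speech matching the skill's grammar.
      pcd.onSpeech(config.grammar, (utterance, speaker) => {
        pcd.express({
          animation: config.assets.animation,
          sound: config.assets.sound,
          speech: `Good morning, ${speaker.name}!`,
        });
      });
    },
  };
}

// Example with a stubbed runtime standing in for the real execution engine.
const stubPcd = {
  onSpeech: (grammar, cb) => cb('good morning', { name: 'Avery' }),
  express: (out) => console.log(out),
};
createSkill(stubPcd).start();
```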
• Referring to FIG. 17, there is illustrated an exemplary and non-limiting embodiment of a platform for enabling a runtime skill for a PCD 100.
  • various inputs 1700 are received which include, but are not limited to, imagery from a stereo RGB camera, a microphone array and touch sensitive sensors.
  • Inputs 1700 may come via a touch screen.
  • Inputs 1700 may form an input to sensory processing module 1702 at which processing is performed to extract information from and to categorize the input data.
  • Inputs may come from devices or software applications external to the device, such as web applications, mobile applications, Internet of Things (IoT) devices, home automation devices, alarm systems, and the like.
  • Examples of forms of processing that may be employed in sensory processing module include, but are not limited to, automated speech recognition (ASR), emotion detection, facial identification (ID), person or object tracking, beam forming, and touch identification.
  • the results of the sensory processing may be forwarded as inputs to execution engine 1704.
  • the execution engine 1704 may operate to apply a defined skill, optionally receiving additional inputs 1706 in the form of, for example, without limitation, one or more of an input grammar, a behavior tree, JavaScript, animations and speech/sounds.
  • the execution engine 1704 may similarly receive inputs from a family member model 1708.
• the execution engine 1704 may output data forming an input to expression module 1710, whereat the logically defined aspects of a skill are mapped to expressive elements of the PCD 100 including, but not limited to, animation (e.g., movement of various parts of the PCD), graphics (such as displayed on a screen, which may be a touchscreen, or movement of the eye described above), lighting, and speech or other sounds, each of which may be programmed in the expression module 1710 to reflect a mode, state, mood, persona or the like of the PCD as described elsewhere in this disclosure.
  • the expression module 1710 may output data and instructions to various hardware components 1712 of a PCD 100 to express the skill including, but not limited to, audio output, a display, lighting elements, and movement enabling motors. Outputs may include control signals or data to device or applications external to the PCD 100, such as IoT devices, web applications, mobile applications, or the like.
  • a logic level 1800 may communicate with a perceptual level 1802.
  • Perceptual level 1802 may detect various events such as vision function events via vision function module 1804, an animation event via expression engine 1806 and a speech recognition event via speech recognizer 1806. Communication between logic level 1800 and perceptual level 1802 may serve to translate perceived events into expressed skills.
  • JavaScript APIs may exist for various types of sensory input.
  • JavaScript APIs may exist for various expression output.
  • JavaScript APIs may also exist for the execution engine 1704, which in turn may invoke other existing JavaScript APIs.
  • JavaScript APIs may exist for information stored within various models, such as a family member model 1708.
• the execution engine 1704 may use any of these APIs, such as by extracting information via them for use in the execution engine 1704. In embodiments, developers who do not use the execution engine may directly access the family member model 1708.
  • the PCD 100 may learn, such as using machine learning, about information, behavioral patterns, preferences, use case patterns, and the like, such as to allow the PCD 100 to adapt and personalize itself to one or more users, to its environment, and to its patterns of usage.
  • Such data and the results of such learning may be embodied in the family member model 1708 for the PCD 100.
  • Sensory input APIs may include a wide range of types, including automated speech recognition (ASR) APIs, voice input APIs, APIs for processing other sounds (e.g., for music recognition, detection of particular sound patterns and the like), APIs for handling ultrasound or sonar, APIs for processing electromagnetic energy (visible light, radio signals, microwaves, X-rays, infrared signals and the like), APIs for image processing, APIs for handling chemical signals (e.g., detection of smoke, carbon monoxide, scents, and the like) and many others.
  • timestamps may be provided to allow merging of various disparate sensory input types.
  • timestamps may be provided with a speech recognizer to allow merging of recognized speech with other sensory input.
  • ASR may be used to enroll various speakers.
  • a speech tool suite may be provided for the speech interface of the PCD 100.
• Sound and TTS APIs may allow the PCD 100 to play audio files, speak words from a string of text, or the like. The spoken content may be a constant string or the content of a string variable, an arbitrary amount of silence, or any arbitrary combination of these.
• a developer can specify a command such as: Speak("beep.wav", NAME, ":SIL 3sec", "I am so happy to see you"), resulting in a beeping sound, speaking a particular name represented by populating the NAME variable with an actual name, a silent period of three seconds, then the greeting.
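• The following is a minimal JavaScript sketch of how such a composed utterance could be interpreted; the speak() helper and its handling of the silence marker are assumptions for illustration, not the literal SDK signature.

    // Hypothetical sketch of composing a speech output from mixed elements.
    // The speak() helper and the ':SIL' silence notation follow the example above;
    // they are illustrative, not the actual PCD SDK API.
    function speak(...elements) {
      for (const element of elements) {
        if (element.endsWith('.wav') || element.endsWith('.mp3')) {
          console.log('play audio asset:', element);
        } else if (element.startsWith(':SIL')) {
          console.log('insert silence:', element.slice(4).trim());
        } else {
          console.log('synthesize TTS:', element);
        }
      }
    }

    const NAME = 'Avery';   // in practice populated from the family member model
    speak('beep.wav', NAME, ':SIL 3sec', 'I am so happy to see you');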
  • Text may be expressed in SSML (Speech Synthesis Markup Language).
  • Simple text may be spoken according to conventional punctuation rules.
  • the PCD SDK may include methods to upload content assets, like audio files, as well as to set properties of audio output, such as volume.
  • the social robot may be configured to play various different formats, such as .wav, .mp3, and the like.
  • Assets may be stored in various libraries, such as in the cloud or a local computing device.
  • the PCD SDK may allow the PCD to search for assets, such as by searching the Internet, or one or more sites, for appropriate content, such as music, video, animations, or the like.
  • a set of family member and utility APIs may be provided that act as a front end to data stored remotely, such as in the cloud. These APIs may also include utilities that developers may want to use (such as logging, etc.).
  • a set of execution engine APIs may be provided to enable interface with the execution engine 1704.
  • the execution engine 1704 may comprise an optional JavaScript component that can act on the configuration files created using several different tools, such as, without limitation, the Behavior Editor and the Expression Tool Suite.
  • the execution engine may also multiplex data from the Family Member store, again making it easier for developers to write skills.
  • the Family Member store can also include hardware accessories to expand the physical capabilities of the PCD 100, such as projectors, a mobile base for the PCD 100, manipulators, speakers, and the like, as well as decorative elements that allow users to customize the appearance of the PCD 100.
  • Asset creation may involve creating the skill's assets. It may not necessarily be the first step, but is often an ongoing task in the flow of creating a skill, where assets get refined or expanded as the skill itself gets developed.
  • the types of assets that may be created include animations, such as using a special tool within an expression tool suite to easily create new body and eye animations. Developers may also be able to repurpose body and eye animations in the "Developers" section of a PCD skills store. In embodiments developers may share their assets with consumers or other developers, such as on a skills store for the PCD 100 or other environment, such as a developer's portal.
  • Assets may also include sounds, such that developers may create their own sounds using their favorite sound editor, as long as the resource is in an appropriate format with appropriately defined characteristics.
  • Assets may include text-to-speech assets, leveraging a parametric TTS system, so that developers may create text-to-speech instances, and annotate these instances with various attributes (like "happy") that can modulate the speech.
  • Assets may include light visualizations, such as to control the LED lights on the PCD 100 (such as on the torso), in which case developers may use an expression tool suite to specify control. Note that developers can also repurpose LED light animations, such as from a "Developers" section of the PCD skills store as well.
  • Assets may include input grammars.
  • developers may use a speech tool suite to specify the various grammars they wish recognized.
  • the developer may write the skill itself using a behavior editor.
  • the behavior editor enables the logic governing the handling of the sensory input, as well as the control of the expression output. While most of this step can be done using a straightforward editor, the SDK may enable the addition of straight JavaScript code to enable a developer to do things that might be unique to the particular skill, such as exchanging data with one or more proprietary REST APIs, or the like.
• a developer may exercise various aspects of a skill using a PCD simulator, which may occur in real time or near real time.
  • the simulator may support the triggering of basic sensory input, and may also operate on a sensory input file created earlier via PCD's developer record mode. Inputs to the simulator may come from physical input to the PCD 100, from one or more sensors external to the PCD 100, directly from the simulator, or from external devices, such as IoT devices, or applications, such as web applications or mobile applications.
  • the simulator will support parts of the Expression System via WebGL graphic output, as well as text to represent the TTS output.
• the development and simulation cycle can be in real time or near-real time, using a WYSIWYG approach, such that changes in a skill are immediately visible on the simulator and are responsive to dynamic editing in the simulator.
  • the developer may need to test the skill on the PCD 100 itself, since more complex behaviors (such as notifications) may not be supported within the simulator.
  • the developer may again drive the testing via sensory input files created via the PCD's record mode.
  • inputs may be streamed in real time or near real time from an external source.
  • the developer may submit the skill, such as to the host of the SDK, for certification.
  • Various certification guidelines may be created, such as to encourage consistency of behavior across different skills, to ensure safety, to ensure reliability, and the like.
  • the skill may be placed in the PCD store for access by users, other developers, and the like.
  • developers can also post assets (e.g., animations, skills, sounds, etc.) on a store for the PCD 100, a developer's portal, or the like.
  • Tools may include various tools related to speech in a speech tool suite of utilities to create new grammars, and annotate the text-to-speech output. In embodiments, tools may be used to apply filters or other sounds or audio effects over a spoken utterance. Tools may include a behavior editor to allow developers to author behavior, such as through behavior trees (e.g. the "brain") for a given skill.
  • An expression tool suite may include a suite of utilities to author expressive output for the social robot, which may include an animation simulator that simulates animated behavior of the PCD 100. This may comprise HTML or JavaScript with a webkit and an interpreter, such as V8 JS InterpreterTM from GoogleTM underneath. Behaviors and screen graphics may be augmented using standard web application code.
  • a simulated runtime environment may be provided as a tool for exercising various aspects of a skill.
• Referring to FIG. 20, there are illustrated exemplary and non-limiting screen shots of a local perception space (LPS) visualization tool that may allow a developer to see the local perception space of the PCD 100, such as seen through a camera of the PCD 100.
  • This can be used to identify and track people within the view of the PCD 100.
  • this may grow in complexity and may comprise a three-dimensional world, with elements like avatars and other visual elements with which the PCD 100 may interact.
• a speech tool suite may include tools related to hearing (e.g., an "ear" tool) and speaking. This may include various capabilities for importing phrases and various types of grammars (such as word spotting, statistical, etc.) from a library, such as yes/no grammars, sequences of digits, natural numbers, controls (continue, stop, pause), dates and times, non-phrase-spotting grammars, variables (e.g., $name), and the like. These may use ASR, speech-to-text capabilities, and the like and may be cloud-based or embedded on the PCD 100 itself.
  • the tool suite may include basic verification and debugging of a grammar, with application logic, in the simulator noted above.
  • a tool suite may include tools for developing NLU (natural language understanding) modes for the PCD 100.
• Resources may be created using an on-device grammar compilation tool.
  • Resources may include tools for collecting data (e.g., like mechanical turk) and machine learning tools for training new models: such as for phrase spotting, person identification via voice, or other speech or sound recognition or understanding capabilities.
  • Grammars may publish output tags for GUI presentation and logic debugging.
  • a sensor library of the PCD 100 may be used to create sensory resources and to test grammar recognition performance. Testing may be performed for a whole skill, using actual spoken ASR. Phrase-spotting grammars may be created, tested and tuned.
• when invoking the recognizer, a developer may modify a restricted set of the recognizer's parameters (e.g., timeout, rejection, etc.) and/or invoke a callback on recognition results (such as to perform text processing).
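• The sketch below illustrates how a recognizer might be invoked with a restricted parameter set and a result callback. The listenFor() name, the option fields, and the simulated result are assumptions for illustration, not the actual recognizer API.

    // Hypothetical sketch of invoking a recognizer with a timeout, a rejection
    // threshold, and a callback on recognition results.
    function listenFor(grammar, options, onResult) {
      // A real recognizer would stream audio and apply the grammar; here we simulate a hit.
      const simulated = { tag: 'YES', confidence: 0.92, text: 'yeah sure' };
      if (simulated.confidence >= (options.rejectionThreshold || 0.5)) {
        onResult(simulated);
      }
    }

    listenFor('yes-no', { timeoutMs: 8000, rejectionThreshold: 0.6 }, (result) => {
      // The callback can perform additional text processing before the skill acts on the tag.
      console.log('recognized tag:', result.tag, 'from utterance:', result.text);
    });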
  • the PCD behavior editor 2100 may enable developers/designers to quickly create new skills on a PCD 100.
  • the output file, defined in this section, drives the execution engine 1704. More details on the behavior editor 2100 are provided below.
• the behavior authoring tool may comprise a behavior tree creator designed to be easy to use, unambiguous, extensible, and substantially WYSIWYG.
  • the behaviors themselves may comprise living documentation.
  • Each behavior may have a description and comment notation.
  • a behavior may be defined without being implemented. This allows designers to "fill in" behaviors that don't yet exist.
  • the PCD behavioral system may be, at its core, made up of very low level simple behaviors. These low level behaviors may be combined to make more high-level complex behaviors. A higher-level behavior can either be hand coded, or be made up of other lower level behaviors. This hierarchy is virtually limitless.
• behavior hierarchies can be divided roughly into four levels: (1) atomic behaviors (the minimal set of behaviors to have a functioning behavior tree, generally including behaviors that are not necessarily dependent on the functions of the PCD 100); (2) PCD 100 based behaviors (behaviors that span the full capability set of the PCD 100, such as embodied in various JavaScript APIs associated with the social robot); (3) compound, high level behaviors (which may be either hand coded, or made up of parameterized behavior hierarchies themselves); and (4) skeleton behaviors (behaviors that do not exist, are not fully implemented, or whose implementation is separate). Behavior hierarchies may be learned from the experience of the PCD 100, such as using machine learning methods such as reinforcement learning, among others.
  • Each function call in the social robot API such as embodied in a JavaScript API, may be represented as a behavior where it makes sense.
  • a skeleton behavior can be inserted into a behavior tree for documentation purposes and implemented later and bound at runtime. This allows a designer who needs a behavior that does not yet exist to insert this "Bound Type" which includes a description and possible outcomes of this behavior (Fail, Succeed, etc.) and have an engineer code the implementation later. If, during playback, the bound type exists then that type is bound to the implementation; otherwise, the PCD 100, or the simulation, may speak the bound behavior name and its return type and continue on in the tree.
  • the tools may also support the definition of perceptual hierarchies to develop sophisticated perceptual processing pipelines.
  • Outputs of these perceptual trees may be connected to behaviors, and the like.
  • the development platform and SDK support a suite of multi-modal libraries of higher-order perceptual classification modules (Reusable Multi-Modal Input-Output Modules) made available to developers.
  • a behavior tree may be made of these elementary behaviors: BaseBehavior - a leaf node; BaseDecorator - a behavior decorator; Parallel - a compound node; Sequence (and sequence variations) - a compound node; Select - a compound node; and Random (and random variations) - a compound node.
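• The sketch below illustrates, under assumed implementations, how a tree could be composed from a few of these elementary behaviors. The class bodies and the tick()/SUCCEED/FAIL conventions are illustrative only; the PCD behavior library's actual implementation may differ.

    // Hypothetical sketch of composing a behavior tree from elementary behaviors.
    class BaseBehavior {
      constructor(name, run) { this.name = name; this.run = run; }  // leaf node
      tick() { return this.run(); }                                  // returns 'SUCCEED' or 'FAIL'
    }

    class Sequence {
      constructor(...children) { this.children = children; }
      tick() {
        for (const child of this.children) {
          if (child.tick() === 'FAIL') return 'FAIL';                // stop at the first failure
        }
        return 'SUCCEED';
      }
    }

    class Select {
      constructor(...children) { this.children = children; }
      tick() {
        for (const child of this.children) {
          if (child.tick() === 'SUCCEED') return 'SUCCEED';          // first child that succeeds wins
        }
        return 'FAIL';
      }
    }

    const lookAt = new BaseBehavior('LookAt', () => 'SUCCEED');
    const playClip = new BaseBehavior('PlayCompiledClip', () => 'SUCCEED');
    const fallbackIdle = new BaseBehavior('Idle', () => 'SUCCEED');

    // Try the greeting sequence first; fall back to idling if it fails.
    const root = new Select(new Sequence(lookAt, playClip), fallbackIdle);
    console.log('root result:', root.tick());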
  • Atomic behaviors may be almost the raw function calls to the PCD JavaScript API, but wrapped as a behavior with appropriate timing. They span the entire API and may be very low level. Some examples include: LookAt; LoadCompileClip; and PlayCompiledClip. Compiled clips may have embedded events.
  • Compound/High-level behaviors may be high level behaviors that combine other high level and/or low level behaviors. These behaviors may be parametrized. Examples may include: BeAttentive; TakeRandomPictures; BeHappy; and StreamCameraToScreen. Behaviors can be goal directed, such as to vary actions to achieve a desired outcome or state with the world. For example, in the case of object tracking, a goal may be to track an object and keep it within the visual field. More complex examples would be searching to find a particular person or varying the behavior of the PCD 100, such as to make a person smile.
  • the mood or affective or emotive state of the PCD 100 can modify the behavior or style of behavior of the PCD 100. This may influence prioritization of goals or attention of the PCD. This may also influence what and how the PCD 100 learns from experience.
• Readability of the behavior trees is important, especially when the trees become large. Take a simple case statement that branches the tree based on an utterance.
  • the formal way to declare a case statement is to create a Select behavior that has children from which it will "select" one to execute. Each child is decorated with a FailOnCondition that contains the logic for "selecting" that behavior. While formal, it makes it difficult to automatically see why one element might be selected over another without inspecting the logic of each decorator.
  • the description field though, may be manually edited to provide more context, but there is not necessarily a formal relationship between the selection logic and the description field.
• Referring to FIG. 22, there is illustrated a formal way of creating branching logic according to an exemplary and non-limiting embodiment. Note the code of the first and second decorators 2200, 2202; FIG. 22 illustrates the formal relationship.
• For the PCD 100, there are common branching patterns. A few of these include grammar-based branching, touch-based branching, and vision-based branching.
• the behavior tool GUI may simplify the tree visualization and provide a formal relationship between the "description" and the logic. This may be achieved by adding to the behavior tree editor an "Info" column, which is auto-populated with a description derived by introspecting the underlying logic.
  • the GUI tool may know that the specialized Select behavior called “GrammarSelect” is meant to be presented in a particular mode of the GUI.
  • the underlying tree structure may be exactly the same as in FIG. 22, but it may be presented in a more readable way.
  • select logic may be added as an argument to the behavior itself.
  • the added argument may be a string field that corresponds to the grammar tag that is returned, and the value of that argument may be automatically placed in the "Info" field.
  • the value of the added argument in each child behavior to GrammarSelect can be used to generate the correct code that populates the underlying SucceedElseFail decorator.
  • the PCD 100 speaks: "OK, I am going to take a picture of you now. Ready?"
  • the PCD 100 initiates a sequence, such as a TakePictureBehavior.
• If the PCD 100 detects a "no," such as hearing a NoBehavior Speech:NO or sensing a Touch:NOAREA, then the PCD 100 executes a GoHomeBehavior and initiates a speech behavior: robotSpeak "OK. Going back to home screen".
  • a "no” such as hearing a NoBehavior Speech:NO or sensing a Touch:NOAREA
• the PCD Speak is a basic behavior that randomizes a number of prompts and the corresponding animations (in embodiments, one can see the prompts and the animations by double-clicking the behavior, and the behavior editing box will pop up). It is important to have typing of this behavior, because the UI designer can write the prompt while a developer is designing the application. Then one can automatically mine the behavior tree for all the prompts, create a manifest table for the voice talent, automatically create file names for the prompts, etc. (that alone will save a lot of design and skill-development time). With interaction behavior expressed as in the example above, a developer can quickly understand what is going to occur, since the tree represents the design and the implementation at the same time.
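• The picture-taking interaction above might be sketched as follows; robotSpeak, waitForNo, goHome, and takePicture are hypothetical stand-ins for the PCD Speak, NO-detection, GoHomeBehavior, and TakePictureBehavior behaviors, and the stubbed PCD object exists only so the sketch runs outside the SDK.

    // Hypothetical sketch of the picture-taking interaction as plain logic.
    async function takePictureSkill(pcd) {
      await pcd.robotSpeak('OK, I am going to take a picture of you now. Ready?');
      const declined = await pcd.waitForNo({ speech: 'NO', touchArea: 'NOAREA', timeoutMs: 5000 });
      if (declined) {
        await pcd.goHome();
        await pcd.robotSpeak('OK. Going back to home screen');
        return;
      }
      await pcd.takePicture();
    }

    // Minimal stub so the sketch is self-contained.
    const stubPcd = {
      robotSpeak: async (text) => console.log('PCD says:', text),
      waitForNo: async () => false,   // pretend the user did not say no
      goHome: async () => console.log('returning to home screen'),
      takePicture: async () => console.log('snap!')
    };
    takePictureSkill(stubPcd);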
  • the main window of the behavior editor may be a tree structure that is expandable and collapsible. This represents the tree structure of the behaviors. For each behavior in this view one can, in embodiments, drag, drop, delete, copy, cut, paste, swap with another behavior, add or remove one or more decorations, add a sibling above or below and add a child (and apply any of the above to the sibling or child).
  • This top level view should be informative enough that an author can get a good idea of what the tree is trying to do. This means that every row may contain the behavior and decorator names, a small icon to represent the behavior type, and a user-filled description field.
  • Each behavior may be parameterized with zero or more parameters.
  • a SimplePlay Animation behavior might take one parameter: the animation name. More complex behaviors will typically take more parameters.
  • a compound behavior may be created in the behavior tool as sub behaviors.
  • one may arbitrarily parameterize subtree parameters and bubble them up to the top of the compound behavior graphically.
  • Each parameter to a behavior may have a "type" associated with it.
  • the type of the parameter may allow the behavior authoring tool to help the user as much as possible to graphically enter valid values for each argument.
  • the following is an embodiment of a type inheritance structure with descriptions on how the tool will graphically help a user fill in an appropriate value: (1) CompiledClip: Editing a compiled clip may take a developer to the Animation Editor, which may be a timeline based editor; (2) String: A text box appears; (3) File: a file chooser appears: (4) Animation File: A file chooser window appears that lists available animations, which may include user generated animations and PCD-created animations.
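• A minimal sketch of a parameterized behavior with a typed argument follows; the defineBehavior() helper and the 'AnimationFile' type label are assumptions used only to show how typing could let the tool present an appropriate chooser.

    // Hypothetical sketch of declaring a behavior with typed parameters.
    function defineBehavior(name, params, run) {
      return { name, params, run };
    }

    const simplePlayAnimation = defineBehavior(
      'SimplePlayAnimation',
      [{ name: 'animation', type: 'AnimationFile' }],   // an editor could show an animation file chooser
      (args) => console.log('playing animation:', args.animation)
    );

    simplePlayAnimation.run({ animation: 'wave.anim' });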
  • a debug web interface may show a graphical representation of the tree, highlighting the current node that it is on. Start, stop, and advance buttons may be available. During pause, the tool may allow introspection on global watch variables and behavior parameter values. Furthermore, limited input interaction may remain available. This may include triggering a phrase or placing a person near the social robot, which may be able to add template knowledge about this person, for example.
  • developers may also share behavior models with other developers, such as sharing sensory- motor skills or modules. For example, if the PCD 100 has a mobile base, navigation and mapping models may be shared among developers.
  • the behavior logic classes may be modified by developers, such as to expand and provide variants on functionality.
  • the tools of the SDK may include an expression tool suite for managing expressions of the social robot.
  • a core feature of the Expression Tool Suite is the simulation window. With reference to FIG. 24, there is illustrated an embodiment of a simulation window where the main view in both screenshots simulates the animation of the PCD 100.
  • the top main view 2400 also simulates the focal point for the eye graphic.
  • the upper left portion in each screenshot simulates the screen graphic 2402, 2402'.
  • This simulation view may be written in WebGL, such that no special tools are required to simulate the social robot animation (other than having a current version of a browser, such as ChromeTM, running).
  • This simulation view need not be a separate tool unto itself; instead, it may be a view that can be embedded in tools that will enable the host of the PCD platform and other developers to create and test PCD animations, such as animations of various skills. It may either be invoked when a developer wants to play back a movement or animation in real time or by "stepping through" the animation sequentially.
• a simulation tool may be provided for simulating behavior of the social robot, where the same code may be used for the simulation and for actually running the social robot.
  • FIG. 25 shows a conventional animation editor 2500 of the type that may be adapted for use with the PCD 100.
  • Key features of the animation editor may include a simulation window 2502 for playing back social robot animations, an animation editor 2504 where a developer/designer may place assets (movements, graphics, sound/TTS, LED body lighting, or complete animations) into a timeline, and an assets library 2506, where a developer/designer can pick existing assets for inclusion in the timeline. Assets may come from either the developer's hard drive, or from the PCD store.
  • the editor may allow for use of backgrounds or objects that may expand the virtual environment of the PCD, such as having avatars for simulating people, receiving inputs from a user interface, and the like.
  • the animation editor may have a mode that inverses controls and allows users to pose the robot and have an interface for setting keyframes based on that pose.
  • animating screen-based elements like an eye, overlay or background element may be done by touch manipulation, followed by keyframing of the new orientation/changes.
• Variants of this approach may also be embodied, such as using the PCD 100 to record custom sound effects for animations (placeholder or final), which would greatly speed up the creative process of designing skills.
  • the tool may allow previewing animations via the animation editor directly on the PCD 100 to which the editor is connected.
  • the host of the PCD platform may support the ability to import assets and create new assets.
  • "Import" and "create” capabilities may support the various asset types, described herein. For example, creating a new movement may launch the social robot animation movement tool, while creating new TTS phrases launches the social robot's speaking tool.
  • Creating new LED lighting schemes may be specified via a dialog box or a lighting tool.
  • one or more tools may be embodied as a web application, such as a ChromeTM web application.
• the given tool may save both the social robot animation itself, such as in a unique file type such as a .jba or .anim file, and a social robot animation project file, such as of a .jbp file type.
  • This approach may be extensible to new tools as the PCD 100 evolves with new capabilities, such as perceptual capabilities, physical capabilities, expressive capabilities, connectivity with new devices (e.g., augmented reality devices), and the like.
• Referring to FIG. 26, there is illustrated an exemplary and non-limiting embodiment of a PCD animation editor 2500 that may be used, such as by invoking "New... Animation" from the PCD animation editor 2500.
• At the core of the PCD animation editor 2500, there are radian positions that specify body positions (such as, in a three-part robot, by controlling the radial positions of the bottom, middle, and top sections of the robot).
  • a set of sliders 2602 may be used to provide movement positions.
• each set of positions may also be time-stamped, such that a complete movement is defined by an array of time/body-position values.
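• For example, a movement might be represented roughly as the following array of time-stamped radian positions; the field names and values are illustrative assumptions and do not reflect the actual animation file format.

    // Hypothetical sketch of a movement as an array of time / body-position keyframes,
    // with radian positions for the bottom, middle and top body sections.
    const nodAnimation = [
      { timeMs: 0,   bottom: 0.00, middle: 0.00, top:  0.00 },
      { timeMs: 400, bottom: 0.00, middle: 0.15, top: -0.30 },
      { timeMs: 800, bottom: 0.00, middle: 0.00, top:  0.00 }
    ];

    // A simple player would interpolate between keyframes and command the motors.
    for (const key of nodAnimation) {
      console.log(`t=${key.timeMs}ms ->`, key.bottom, key.middle, key.top);
    }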
  • the remaining sliders may be used for controlling the joints in the eye animation.
  • the tool may also support the importing of a texture file to control the look of the eye graphic.
  • the tool may support simulating interaction with a touch screen.
  • the tool may enable various graphics beyond the eye, such as interactive story animations.
  • the PCD simulator may not only include the above-referenced simulation window, but also may have an interface/console for injecting sensory Input.
  • a key based access to a web portal associated with a PCD 100 may allow a developer to install skills on the social robot for development and testing.
  • the web portal on the PCD 100 may provide a collection of web-based development, debugging and visualization tools for runtime debugging of the skills of the PCD 100 while a user continues to interact with the PCD 100.
  • the PCD 100 may have an associated remote storage facility, such as a PCD cloud, which may comprise a set of hosted, web-based tools and storage capabilities that support content creation for animation of graphics, body movement, sound and expression.
  • the PCD 100 may have other off-board processing, such as speech recognition machine learning, navigation, and the like.
  • This may include web-based tools for creation of behavior trees for the logic of skills using behavior tree libraries, as well as a library of "plug- in" content to enhance developer skills, such as common emotive animations, graphics and sounds.
  • the interface may be extensible to interface with other APIs, such as home automation APIs and the like.
  • the methods and systems disclosed herein may address various security considerations. For example, skills may require authorization tokens to access sensitive platform resources such as video and audio input streams. Skills may be released as digitally signed "packages" through the social robot store and may be verified during installation. Developers may get an individual package, with applicable keys, as part of the SDK.
  • the PCD SDK may include components that may be accessed by a simple browser, such as a ChromeTM browser, with support for conventional web development tools, such as HTML5, CSS, JS and WebGL, as well as a canvas for visualization.
  • an open source version of a browser such as ChromeTM may be used to build desktop applications and be used for the simulator, development environment and related plugins, as well as being used for the PCD 100 application runtime.
  • This means code for the PCD 100, whether for development, simulation or runtime usage can typically run in regular browsers with minimal revision, such as to allow skills to be previewed on mobile or PC browsers.
  • the SDK described herein may support various asset types, such as input grammars (such as containing pre-tuned word-spotting grammars), graphics resources (such as popular graphics resources for displaying on the screen of the social robot); sounds (such as popular sound resources for playing on speakers of the PCD 100, sculpting prosody of an utterance of the PCD 100, adding filters to the voice, and other sound effects); animations (such as popular bundles of movement, screen graphics, sound, and speech packaged into coordinated animations); and behavior trees (such as popular behavior tree examples that developers can incorporate into skills).
  • the PCD SDK may enable managing a wide range of sensory input and control capabilities, such as capabilities relating to the local perceptual space (such as real time 3D person tracking, person identification through voice and/or facial recognition and facial emotion estimation); imaging (such as snapping photos, overlaying images, and compressing image streams); audio input (such as locating audio sources, selecting direction of an audio beam, and compressing an audio stream); speech recognition (such as speaker identification, recognition of phrases and use of phrase-spotting grammars, name recognition, standard speech recognition, and use of custom phrase-spotting grammars); touch (such as detecting the touching of a face on a graphic element and detecting touches to the head of the social robot); and control (such as using a simplified IFTTT, complex behavior trees with JavaScript or built- in behavior libraries).
  • the PCD SDK may also have various capabilities relating to the output of expressions and sharing, such as relating to movement (such as playing social-robot-created animations, authoring custom animations, importing custom animations and programmatic and kinematic animation construction); sound (such as playing social robot-created sounds, importing custom sounds, playing custom sounds, and mixing (such as in real time) or blending sounds); speech output (such as playing back pre-recorded voice segments, supporting correct name pronunciation, playing back text using text-to-speech, incorporating custom pre-recorded voice segments and using text-to-speech emotional annotations); lighting (such as controlling LED lights); graphics (such as executing social robot-created graphics or importing custom graphics); sharing a personalization or skill (such as running on devices within a single account, sharing with other developers on other devices, and distributing to a skills store).
  • methods and systems are provided for using a PCD 100 to coordinate a live performance of Internet of Things (IOT) devices.
  • a PCD 100 may automatically discover types and locations of IOT devices including speakers, lights, etc. The PCD 100 may then control lights and speakers to enhance a live musical performance. The PCD 100 may also learn from experience what preferences of the users are, such as to personalize settings and behaviors of external devices, such as music devices, IOT devices and the like.
• In embodiments, an appropriately programmed PCD 100 can (1) automatically discover types and locations of IOT devices, including lights, speakers, etc., and (2) control these lights, speakers, etc., such as to enhance a live musical performance.
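• A heavily hedged sketch of this idea follows; discoverDevices() and device.set() are illustrative stand-ins, since actual discovery and control would go through the devices' own home-automation protocols and APIs rather than these hypothetical helpers.

    // Hypothetical sketch of discovering IoT devices and driving them during a performance.
    async function discoverDevices() {
      // A real implementation might use SSDP/mDNS discovery; here we return a fixed list.
      return [
        { id: 'light-livingroom', type: 'light', location: 'living room',
          set: (state) => console.log('light-livingroom ->', state) },
        { id: 'speaker-shelf', type: 'speaker', location: 'living room',
          set: (state) => console.log('speaker-shelf ->', state) }
      ];
    }

    async function enhancePerformance(beat) {
      const devices = await discoverDevices();
      for (const device of devices) {
        if (device.type === 'light') device.set({ brightness: beat.intensity, color: beat.color });
        if (device.type === 'speaker') device.set({ volume: Math.round(beat.intensity * 100) });
      }
    }

    enhancePerformance({ intensity: 0.8, color: 'warm white' });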
  • a properly designed PCD 100 can be employed as a meeting moderator in order to improve the dynamic and the effectiveness of meetings and conversations.
  • the goal of a meeting or a conversation is to discuss ideas and opinions as the participants in the course of the meeting contribute them. Often, the expectation is that participants will have the opportunity to contribute freely. Given these goals and expectations, an optimal meeting or conversation is one in which valuable and relevant contributions are made by all participants and all important ideas and opinions are contributed.
  • a number of human factors can limit the success of a meeting. For example, individuals are not always committed to the goals and expectations of the meeting. Also, the dynamic between individuals does not always align with the goals and expectations of the meeting. Sometimes the intent of a meeting's participants is explicitly counter to the goals of the meeting. For example, a meeting intended to catalyze a mutual discussion may be hijacked by a participant whose goal is to steer the discussion in a certain direction. In other cases, the dynamic between individuals may be hostile, causing the discussion to focus on the dynamic rather than the intended subject. Unintentional disruption can also minimize the success of a meeting. For example, a talkative, expressive participant can inadvertently monopolize the discussion, preventing others from contributing freely.
  • a social robot can act as an impartial, non-judgmental, expert moderator for meetings.
  • the PCD's biometric recognition capability can allow it to accurately track and measure the degree of participation by each individual in a meeting. This information can be presented as a real time histogram of participation.
  • the histogram can include: talk time per individual; back and forth between individuals; tone (positive/negative) projected by each individual; politeness; idiomatic expressions (positive and negative, encouraging and derogatory, insensitivity); cultural faux pas; emotional state of individuals (affective analysis); overall energy over time; and topics and subtopics discussed.
  • a PCD 100 can transcribe the verbal content and correlate it with social measurements to provide an objective tool for both capturing the discussion and evaluating the effectiveness of the meeting.
  • the PCD 100 can be configured with relevant thresholds so that it can interject during the meeting in order to keep the meeting on track.
  • the robot can interject when: someone is talking too much; the tone is too negative; inappropriate idiomatic expressions are used; insensitivity is detected; the overall energy is too low; and/or essential topics are not addressed.
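• The following sketch illustrates threshold-based interjection logic of this kind; the metric names, threshold values, and interjection phrasings are assumptions chosen only to make the idea concrete.

    // Hypothetical sketch of checking meeting statistics against interjection thresholds.
    const thresholds = {
      maxTalkShare: 0.5,      // no single participant should exceed 50% of talk time
      minEnergy: 0.3,         // overall energy below this triggers an interjection
      maxNegativeTone: 0.6
    };

    function checkInterjection(stats) {
      const messages = [];
      for (const [person, share] of Object.entries(stats.talkShare)) {
        if (share > thresholds.maxTalkShare) {
          messages.push(`Thanks, ${person}. Let's hear from someone else for a bit.`);
        }
      }
      if (stats.energy < thresholds.minEnergy) {
        messages.push('Energy seems low. Should we take a quick break?');
      }
      if (stats.negativeTone > thresholds.maxNegativeTone) {
        messages.push("Let's keep the discussion constructive.");
      }
      return messages;
    }

    console.log(checkInterjection({
      talkShare: { Ana: 0.62, Ben: 0.20, Chris: 0.18 },
      energy: 0.25,
      negativeTone: 0.1
    }));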
• the PCD 100 can help participants accomplish two important goals: conducting meetings more effectively and learning to collaborate and converse more effectively.
• a meeting is an environment in which such a technology may be deployed.
  • Meeting participants may include experts from a variety of disciplines with a variety of communication styles.
  • the PCD moderator can (in a non-judgmental way) present a real-time histogram - displayed on an appropriate display - that shows the relative talk time of all participants.
  • the social robot can (without judgment) attribute these expressions to the contributing participants, such as via a histogram.
  • the energy and tone of the meeting can also be measured and tracked in real time and compared to previous, effective meetings. As a learning opportunity, both effective and ineffective meetings can be compared using the statistics gathered by the PCD 100.
  • a social robot such as a PCD 100 may act as a moderator of meetings, recording and displaying relevant information, and improving the effectiveness and dynamics of meetings, which can translate into increased productivity and a better use of resources.
  • Social robots can play a unique role in message communication, because of their ability to command attention and because of the importance that humans assign to human-like communication.
  • the delivery mode can be chosen automatically by the social robot, so that the message receives an optimal degree of attention by the recipient.
• the message-delivery advantages afforded by an individual social robot are amplified when multiple, networked social robots are employed.
  • a number of PCDs - distributed among rooms/zones of a house - can coordinate their message-delivery efforts.
• the physical presence of multiple PCDs throughout the household increases the window during which messages can be delivered by the robots.
  • the network of PCDs can use their shared biometric recognition capabilities to track the whereabouts of intended recipients throughout the household.
  • the learning algorithms employed by the network of PCDs can generate predictive models about recipient movement and behavior to determine which PCD agent can most effectively deliver the message.
  • This same dynamic can be applied in any physical location and can be applied to businesses, museums, libraries, etc.
  • the physical forms of robots in a network of PCDs may vary.
  • the network may consist of PCDs that are stationary, mobile, ambulatory, able to roll, able to fly, embedded in the dashboard of a vehicle, embedded in an appliance like a refrigerator, etc.
  • the PCD's "brain” (its software, logic, learning algorithms, memory, etc.) can be replicated across a variety of devices, some of which have physically expressive bodies, and some of which do not - as in the case where the PCD 100 software is embodied in a mobile phone or tablet (replicated to a mobile device).
• When a PCD's software is replicated to a mobile device, that device can act as a fully cooperative, fully aware member of a social robot network, as well as interacting with human beings in a social and/or technical network.
  • the degree to which a physically constrained PCD instance can contribute to the task of delivering messages depends on the functionality that it does possess, i.e. PCD software embodied in a typical smartphone will often be able to provide biometric recognition, camera surveillance, speech recognition, and even simulated physical expression by means of on-screen rendering.
  • a smartphone constrained PCD instance may generally be able to contribute fully formed messages that can then be delivered by other unconstrained PCDs within the network.
  • each instance can operate as a fully independent contributor. However, any given instance can also act as a remote interface (remote control) to another PCD instance on the network.
  • This remote interface mode can be active intermittently, or an instance can be permanently configured to act as the remote interface to another instance - as in the case where PCD software is embodied in a smartphone or smartwatch for the specific purpose of providing remote access to an unconstrained instance.
• In a family home setting, a message may be created by a parent using an unconstrained (full-featured) robot unit in the kitchen.
  • the parent may create the message by speaking with the PCD 100.
  • the message may be captured as an audio/video recording and as a text transcript, such as from a speech-to-text technology, and delivered via text-to-speech (TTS). Delivery is scheduled some time in the future, such as after school today.
• the intended recipient, the teenager, may not currently be at home, but may arrive by the intended delivery time. In this example, the teenager does come home after school, but does not enter the kitchen.
• a tablet-embodied robot unit - embedded in the wall by the garage entrance - may recognize the teenager as she arrives. Because the tablet-embodied unit is networked with the kitchen robot unit, the upstairs robot unit, and the teenager's iPod-embodied unit, all four units cooperate to deliver the timely message.
  • the preferred delivery mode is via an unconstrained robot unit, so the tablet unit only mentions that a message is waiting. "Hi, [teenager], you have a message waiting.” The teenager might proceed to her room, bypassing the kitchen and upstairs robot units. When the delivery time arrives, the network of robot units can determine that because the teenager is not in proximity to an unconstrained robot unit, the next best way to deliver the message is via teenager's iPod-embodied unit. As a result, the iPod unit sounds an alert tone and delivers the message: "Hey, [teenager]. There is a brownie waiting for you in the kitchen.” When the teenager finally does enter the kitchen, the kitchen robot unit is already aware that the message was delivered and only offers a courtesy reminder: "Hi, [teenager].
  • the PCD 100 may also summarize the content of the message, and who it is from, such as "Carol, Jim left a message for you. Something about picking up the kids from soccer today.” This may help Carol decide when to listen to the message (immediately, or somewhat later).
  • a network of social robots can use biometric recognition, tracking, physical presence (such as based on a link between the PCD 100 and an associated mobile device), nonverbal and/or social cues, and active prompting to deliver messages that would otherwise be lost in the noise of multiple, crowded message channels.
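• The routing decision among networked units might be sketched as follows; the unit capability fields and the preference ordering are illustrative assumptions, not a specification of how a PCD network actually coordinates.

    // Hypothetical sketch of a PCD network choosing which unit delivers a message.
    const units = [
      { id: 'kitchen', unconstrained: true, seesRecipient: false },
      { id: 'garage-tablet', unconstrained: false, seesRecipient: true },
      { id: 'ipod', unconstrained: false, seesRecipient: true }
    ];

    function chooseDeliveryUnit(network) {
      // Prefer an unconstrained unit that can see the recipient; otherwise fall back
      // to any unit with the recipient in view, then hold the message.
      return network.find(u => u.unconstrained && u.seesRecipient)
          || network.find(u => u.seesRecipient)
          || null;
    }

    const unit = chooseDeliveryUnit(units);
    console.log(unit ? `deliver via ${unit.id}` : 'hold message until a unit sees the recipient');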
• listening to a TV or playing video games loudly can be highly annoying to others in the vicinity with different tastes in what makes audio pleasing. Additionally, many families have members who stay up later than others.
• a proposed solution is to support a way for listeners to use headphones that receive audio wirelessly from a social robot, so that only the listener can hear it and is free to listen as loudly as desired with no compromise.
  • Variants may include Bluetooth headphones, a headphones bundle, a mobile receiver with wired headphones (such as using local Wi-Fi or Bluetooth), and the like.
• a PCD 100 may have reminder capabilities similar to those of personal assistants on popular smartphones. Example: "At 3pm on December 5th, remind me to buy an anniversary gift." "OK, I'll remind you." Reminders can be recurring to support things like medication reminders. Users may have the option to create the reminder as an audio or video recording, in which case the PCD 100 may need to prompt at the beginning of recording. The PCD 100 may summarize after the message has been created, for example: "OK, I'm going to remind John tomorrow when I see him [play audio]." A reminder is just a special form of PCD Jot where a time is specified.
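• A minimal sketch of a reminder modeled as a timed Jot follows; the createReminder() helper and its fields are hypothetical and shown only for illustration.

    // Hypothetical sketch of a reminder as a Jot with a time attached.
    function createReminder({ recipient, text, when, recurring = false }) {
      return { type: 'jot', recipient, text, when: new Date(when), recurring };
    }

    const reminder = createReminder({
      recipient: 'John',
      text: 'Buy an anniversary gift',
      when: '2017-12-05T15:00:00',   // illustrative date only
      recurring: false
    });
    console.log(`OK, I'll remind ${reminder.recipient} at ${reminder.when.toLocaleString()}.`);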
• the PCD 100 may be able to remind known people (one or more for the same reminder) in the family about things. For example, "When you see Suzie, remind her to do her homework" or "At 6pm, remind Dad and Mom to pick me up from soccer practice." If a reminder is given, the originator of the reminder should be notified on the PCD Link if he or she has a PCD Link device.
• a link may exist between a PCD 100 and a mobile device.
• the PCD 100 may display a message as soon as it sees the target person.
  • the PCD 100 may be able to send short text messages or audio/visual recordings to other PCD's in its directory, referred to herein as "Jots."
  • the PCD Jot messages may be editable, and the PCD Jot recordings may be able to play back and re-record before sending.
  • the PCD 100 may confirm for senders that the PCD Jot was successfully sent.
  • the PCD 100 may maintain a "sent" Jots folder for each member of the household, which can be browsed and deleted message by message. Sent Jots may be viewable and/or editable on PCD Link or the PCD 100.
  • the PCD may maintain a list of PCD animations, referred to herein as "robotticons,” akin to emojis used in screen-based devices, such as to give life to or enhance the liveliness of messages. Examples may include a cute wink for "hello” or “oO" for "uh-oh”.
  • the social robotticons can be elaborate, and certain specialized libraries may be available for purchase on the PCD Skills Store. Some PCD robotticons may be standalone animation expressions. Others may accommodate integration of a user video image/message.
  • the PCD robotticons may include any of the PCD's expressive capabilities (LED, bipity boops, or other sounds or sound effects, animation, etc.)
  • a family member may always ask the PCD 100 "play me my reminders [from [person]]" and the PCD 100 may respond by beginning playing from the earliest reminders for that person.
• the PCD's screen may signify that there are reminders waiting. If the PCD sees the intended recipient of a PCD Jot, the PCD 100 may offer to play the Jot if the reminder hasn't been viewed within the last six hours and the time of the reminder has arrived. After viewing a message, the recipient may have an option to reply or forward, and then save or delete the message, or "snooze" and have the message replayed after a user-defined time interval. The default action may be to save messages.
  • the PCD may maintain an inbox of the PCD Jots for each member of the household that may be scrolled.
  • an incoming PCD Jot may carry with it an identifier of the intended recipient.
  • the PCD 100 may only show messages to the intended recipient or other authorized users.
• each member of the family may have their own color, and a flashing "message" indicator in that color lets that family member know the message is for them.
  • the paradigm should accommodate instances where there are different messages awaiting different members of the family. Whether a family member is authorized to view another family member's message may be configurable via Administrator.
  • the PCD 100 may be able to create to-do lists and shopping lists, which may be viewable and editable on the PCD Link. For example, users may be able to say "PCD, I need to sign Jenny up for summer camp" and the PCD 100 may respond "I've added 'sign Jenny up for summer camp' to your to-do list.” Or "PCD, add butter to my shopping list.” Lists may be able to be created for each family member or for the family at large. Each member of the family may have a list, and there may be a family list.
  • the PCD Jot may time out after a period of non-use.
• the PCD may have a persistent "Be" state that engages in socially and character-based (emotive, persona model-driven) interactions, decisions, and learnings with users. This state may modulate the PCD skills and personalize the PCD's behavior and performance of these skills to specific users based on experience and other inputs.
• the PCD 100 may have a single, distinct "powered off" pose, as well as some different animation sequences that lead it to that pose when it is turned off.
  • the PCD 100 may have a single, distinct "Asleep” pose when it is plugged in or running on battery power as well as a number of different animation sequences that lead it to that pose after it gets a "sleep” command or if it decides to take a nap while disengaged.
  • the PCD 100 may have microphones and camera off, so that the PCD 100 does not see or hear when asleep in this mode. In the latter mode, a person may need to touch the robot or use a different modality than speech or visual input to wake up the PCD 100.
• the PCD 100 may have several wake up animations corresponding to verbal or tactile "wake up" commands or turning the power on after more than 3 hours asleep or off between 11pm and 11am local time, for example.
• the PCD 100 may have several different ways of "dreaming" while it is asleep.
• Dreaming states may occur during approximately 30% of sleep sessions that last longer than 15 minutes.
  • the PCD's dreams can be interrupted so that it goes into a silent sleep state with commands, or by touch screen, in the event people in the room find its dreams distracting.
• the PCD 100 may notify users verbally and on-screen when its power level is below a given threshold.
  • the PCD 100 may notify users on-screen when its power source is switched between outlet and battery. It should also be able to respond to questions such as "Are you plugged in?" or "Are you using your battery?"
• the PCD 100 may automatically power on or off when the button on the back of its head is pushed and held. A short button push puts the social robot to sleep.
• the PCD 100 may be set to wake up from sleep via voice or touch, or just touch. If the PCD 100 is on but not engaged in active interaction (i.e., in a base state referred to herein as the "Be" or "being" state), the PCD 100 may exhibit passive awareness animations when someone enters its line of sight or makes a noise. These animations may lead to idling active awareness if the PCD 100 believes the person wants to engage.
• If the PCD 100 is passively aware of someone and believes that person wants to actively engage, either because of a verbal command or because that person is deliberately walking toward the PCD 100, it may exhibit "at your service" type active awareness animations.
• the PCD 100 may comment that it cannot see because a foreign object is covering its eyes if it is asked to do anything that requires sight. If the PCD 100 is tapped on the head independent of any kind of prompt, it may revert to Idling Active Awareness. In other embodiments, if the PCD 100 is stroked or petted, or if it is praised verbally, it may exhibit a "delight" animation and revert to Idling Active Awareness.
  • the PCD 100 may generally greet that family member in a personal way, though not necessarily verbally (which may depend on the recency of a last sighting of that family member).
• If the PCD 100 encounters an unrecognized stranger, it may go into passive awareness mode. If it detects interest from the stranger, it should introduce itself without being repetitive. The PCD 100 may not proactively ask who the other person is, since the "known family members" are managed by the PCD's family Administrator.
• If a recognized member of the PCD's family is with an unrecognized stranger, the PCD 100 may first greet the family member personally. If that family member introduces the PCD 100 to the stranger, the PCD 100 may not proactively ask who the other person is, since the "known family members" are managed by the social robot's family Administrator.
• If the social robot's family Administrator introduces the social robot to a new person and proactively says it should remember the new person, the social robot may take up one of the 16 ID slots. If there are no available ID slots, the PCD 100 may ask the Administrator if he or she would like to replace an existing recognized person.
  • the PCD 100 collects the necessary visual and audio data, and may also suggest that the Administrator have the new person go through the PCD Link app to more optimally capture visual and audio samples, and learn name pronunciation.
• the PCD 100 may have several forms of greetings based on the time of day. For example, "Good morning," "Good evening," or "You're up late." If the PCD 100 knows the person it is greeting, the greeting may frequently, but not always, be personalized with that person's name. If someone says goodbye to the PCD 100, it may have several ways of bidding farewell. If the PCD 100 knows the person saying goodbye, it may personalize the farewell with that person's name.
  • the PCD 100 may have some idle chatter capabilities constructed in such a way that they don't encourage unconstrained dialog. These may include utterances that aim for a user response, or simple quips designed to amuse the user but not beckoning a response. These utterances may refer to known "Family Facts" as defined in the Family Facts tab, such as wishing someone in the family "happy birthday". In embodiments, visual hints may be displayed on a screen as to what utterances the PCD 100 is expecting to hear, such as to prompt the user of the PCD 100. Utterances may also be geocentric based on a particular PCD's zip code.
  • Utterances may also be topical as pushed from the PCD Cloud by the design team such as "I can't believe Birdman swept the Academy Awards! ".
  • Quips may be humorous, clever, and consistent with the PCD's persona. Chatbot content should also draw from the PCD's memory of what people like and dislike based on what they've told it or what it gleans from facial expression reactions to things like pictures, songs, jokes, etc.
  • the PCD 100 may periodically ask family members questions designed to entertain.
  • the PCD 100 may have several elegant ways of expressing incomprehension that encourage users to be forgiving if it is unable to understand a user despite requests to repeat the utterance.
• the PCD 100 may have several likeable idiosyncratic behaviors that it expresses from time to time, such as specific preferences, fears, and moods.
  • the PCD 100 may have a defined multimodal disambiguation paradigm, which may be designed to elicit patience and forgiveness from users.
  • the PCD 100 may have several elegant ways of expressing it understands an utterance but cannot comply or respond satisfactorily.
  • the PCD 100 may sometimes amuse itself quietly in ways that exhibit it is happy, occupied and not in need of any assistance.
  • the PCD 100 may have several ways to exhibit it is thinking during any latency incident, or during a core server update.
• the PCD 100 may have several ways of alerting users that its WiFi connectivity is down, and also that WiFi has reconnected. Users can always reactivate WiFi from the settings and by using the QR code from the PCD Link.
  • the PCD 100 may have a basic multimodal navigation paradigm that allows users to browse through and enter skills and basic settings, as well as to exit active skills. Advanced settings may need to be entered via PCD Link.
  • the PCD 100 may have the ability to have its Administrator "lock” it out so that it cannot be engaged, beyond an apologetic notification that it is locked, without a password.
  • the PCD 100 may be able to display available WiFi networks on command.
  • the PCD 100 may display available WiFi networks if the WiFi connection is lost.
  • the PCD 100 may provide a way to enter the WiFi password on his screen.
  • the PCD 100 may have a visual association with each known member of the family. For example, Jim is always Blue, Jane is always Pink, Mom is always Green, and Dad is always Purple. When the PCD 100 interacts with that member of the family, that visual scheme should be dominant. This visual identifier can be used throughout the PCD's skills to ensure family members know the PCD 100 recognizes them.
  • the PCD 100 may recognize smiles and respond in a similar manner
• the PCD 100 may play pictures from its PCD Snap photo album in slide show mode while it is in the Be state, and if the user is in the picture, the PCD 100 may say "you look particularly good in this one." Sometimes the PCD 100 may look at its "own" photos, such as of the first Macintosh, R2D2, or pinball machines, but pictures of its family are included from time to time as well.
  • the PCD 100 may often exhibit happiness without requiring interaction. For example, it plays pong with itself, draws pictures on its screen like the Mona Lisa with a PCD 100 as the face. Over time, these skills may evolve (e.g., starts with lunar lander ASCII game or stick figures then progresses to more complex games).
  • the PCD 100 may have a pet, such as a puppy, and its eye may become a ball the dog can fetch.
• the PCD 100 may have passive back and forth with its dog. It may be browsing through its skills, such as reading cookbooks. It could be dancing to some kind of limited library of music, practicing its moves. Sometimes it is napping.
• the PCD 100 may write poems, such as haikus based on family facts, with a gong. In other embodiments, the PCD 100 may be exercising and giving itself encouragement. In other embodiments, the PCD 100 may play instruments, watch funny YouTube clips and chuckle in response, execute a color-by-numbers kids game, move to cause a ball to move through a labyrinth, and play Sudoku. The PCD 100 may have its own photo album and collect stamps.
• the PCD 100 may engage in and display a ping-pong based game wherein side-to-side movements control a user's paddle in play against the PCD 100. If the PCD 100 is running on battery power, there may be an icon on its screen showing remaining battery life.
  • the PCD 100 may engage with one person at a time. It may only turn to engage someone else if they indicate a desire to speak with the PCD 100 AND the person the PCD 100 is currently engaged with remains silent or otherwise disengages.
  • the PCD may use various non-verbal and paralinguistic social cues to manage multi-person interactions simultaneously.
  • the PCD 100 may have a basic timer functionality. For example "PCD, let me know when 15 minutes have passed.”
  • the PCD 100 may be able to create a tone on a phone that is connected to it via PCD Link to assist users in locating a lost phone that is within WiFi range.
  • the ability to control whether someone can create this tone on a PCD Linked phone that is not their own device may be configurable via Administrator settings.
  • the PCD 100 may have a stopwatch functionality similar to that used in current smartphones.
  • the PCD 100 may have a built-in clock and be able to tell the time in any time zone if asked. Sometimes the PCD 100 may display the time and other times it may not, based at least in part on its level of engagement and what it is doing.
  • the PCD 100 may have an alarm clock functionality. For example "The social robot, let me know when it's 3:30pm". There may be a snooze function included.
  • the PCD 100 may have several alarm sounds available and each family member may set their preferred alarm sound. If no preferred alarm sound is set, the PCD 100 may select one.
  • the PCD 100 may have established multi-party interaction policy, which may vary by skill.
  • the PCD 100 may have a quick "demo reel" which it can show if asked to "show off" its capabilities.
  • the PCD 100 may have specified but simple behavior options when it encounters and recognizes another PCD 100 by voice ID if it is introduced to another PCD 100 by a family member.
  • a PCD 100 may have specific, special behaviors designed for interacting with another PCD 100.
  • a given skill or behavior may manifest differently based on other attributes associated with a PCD 100.
  • the PCD 100 may be programmed or may adapt, such as through interactions over time with a user or group, to have a certain personality, to undertake a certain persona, to operate in a particular mode, to have a certain mood, to express a level of energy or fatigue, to play a certain role, or the like.
  • the PCD SDK may allow a developer to indicate how a particular skill, or component thereof, should vary based on any of the foregoing, or any combination of the foregoing.
  • a PCD 100 may be imbued with an "outgoing" personality, in which case it may execute longer, louder versions of speech behaviors, as compared to an "introverted" PCD 100 that executes shorter, quieter versions.
  • an "active" PCD 100 may undertake large movements, while a “quiet” one might undertake small movements when executing the same skill or behavior.
  • a "tired” PCD 100 might display sluggish movements, slow speech, and the like, such as to cue a child subtly that it is time for bed.
  • a social robot platform including an SDK that allows development of skills and behaviors, wherein the skills and behaviors may be expressed in accordance with a mode of the PCD 100 that is independent of the skill.
  • the PCD 100 may adapt to interact differently with distinct people, such as speaking to children differently from adults, while still maintaining a distinct, consistent persona.
  • Important skills include meeting skills (including for first and subsequent meetings, such as robot-augmented video calls), monitoring skills (such as monitoring people and/or pets in the home), photographer skills, storytelling skills (and multi-media mashups, such as allowing a user to choose at branch point to influence the adventure plot, multi-media performance-based stories, and the like), game-playing skills, a "magic mirror” skill that allows a user to use the social robot as an intelligent mirror, a weather skill, a sports skill, or sports buddy skill that interacts to enhance a sports program or sports information or activity like fantasy sports, a music skill, a skill for working with recipes, serving as an intelligent interactive teleprompter with background/animation effects, and a coaching skill (such as for medication compliance, personal development, training, or the like).
  • the methods and systems disclosed herein may undertake beam forming.
  • a challenge is that one may desire to allow a user to call attention of the social robot, such as by using a "hot phrase," such as "Hey, Buddy.” If the PCD 100 is present, it may turn (or direct attention), to the voice that uttered the hot phrase.
  • One way to do that is to use beam forming, where there are beams (spatial filters or channels) that point to different locations. Theoretically, each spatial filter or channel, corresponding to a beam, takes sound from that beam's direction and seeks to disregard the other channels.
  • the methods and systems disclosed herein may undertake improved beam forming and utilization, such as in order to pick up the beam of the person who says the hot phrase.
  • the social robot platform disclosed herein may have a distinct instance of the speech recognizer for each beam, or for a sub-set of beams. Thus, each speech recognizer is listening to a cone of space. If the device is among, for example, a group of four people, and one person says "Hey Buddy," the device will then see that someone is calling attention from the direction of that speaker. To implement that, the systems and methods may have a speech recognizer per channel or subset of channels.
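As a minimal, illustrative sketch of this per-beam arrangement (the beamChannels, createRecognizer, and onSpeakerLocated names below are hypothetical stand-ins, not a documented API), each beam-formed channel can be given its own small phrase-spotting recognizer, and the channel that reports the hot phrase identifies the direction of the speaker:

    // Hypothetical sketch: one phrase-spotting recognizer per beam-formed channel.
    // All names are illustrative stand-ins rather than a documented API.
    function attachHotPhraseSpotters(beamChannels, createRecognizer, hotPhrase, onSpeakerLocated) {
      beamChannels.forEach((channel, beamIndex) => {
        const recognizer = createRecognizer({ phrases: [hotPhrase] }); // small grammar per beam keeps cost low
        channel.onAudioFrame((frame) => recognizer.pushAudio(frame));  // feed only this beam's audio
        recognizer.on('match', () => {
          // The matching beam tells the robot which direction to orient toward.
          onSpeakerLocated(beamIndex, channel.angle);
        });
      });
    }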
  • the system that is running the beam forming may receive information from the motor controllers or may receive location or orientation from an external system, such as a GPS system, a vision system or visual inputs, or a location system in an environment such as a home, such as based on locations of IOT devices.
  • the motor controllers may know the angle through which the PCD 100 has rotated; the PCD 100 may then need to find the speaker's coordinates. This may be accomplished by having the speaker repeat the hot phrase to re-orient it, or by taking advantage of other location information.
  • Person tracking may be used once a speaker is located, so the PCD 100 may move and turn appropriately to maintain a beam in the direction of the speaker as the speaker moves, and other perceptual modalities may augment this, such as tracking by touch, by heat signature, or the like.
  • integration of the sound localization and the visual cues may be used to figure out which person is trying to speak to the PCD 100, such as by visually determining facial movement.
  • one may also deploy an omnidirectional "low resolution" vision system to detect motion in the room, then direct a higher quality camera to the speaker.
  • the methods and systems disclosed herein may use tiled grammars as part of phrase spotting technology.
  • for phrase spotting, one may preferably have short phrases, but the cost of building a phrase spotter grows with the number of different phrases that must be recognized. To distinguish between, for example, ten different contents, the more distinct phrases there are, the costlier recognition becomes (geometrically).
  • the methods and systems disclosed herein may break the phrases into different recognizers that run simultaneously in different threads, so each one is small and costs less. Now one may introduce a series of things, since the concept of phrase spotting lets you find content-bearing chunks of speech.
  • for example, take the phrase: "Hey Buddy, I want to take a picture and send it to my sister." Two chunks likely matter in most situations: "take a picture" and "send it to my sister." Each such chunk may be handled by its own phrase spotting thread.
  • the methods and systems disclosed herein can build a graph of recognizers (not just a graph of grammars, but actual recognizers), each of which recognizes particular types of phrases. Based on the graph, a recognizer can be triggered by an appropriate parent recognizer that governs its applicability and use.
  • an automated speech recognition system with a plurality of speech recognizers working in parallel, the speech recognizers optionally arranged according to a graph to permit phrase spotting across a wide range of phrases.
  • the social robot Software Developer Kit is a web-based application tool adapted to give developers an easy way to build social robot skills (e.g., robot applications).
  • the social robot SDK facilitates skill development via a number of primary components that are compatible with, among other things, Atom/Electron and Node.js. These include animation, behavior creation, skill simulation, natural language interactions with non-verbal and paralinguistic social cues, and robot maintenance.
  • through the SDK one has access to many aspects of the social robot, including the social robot's speech technology, facial recognition and tracking, touch input technology, movement systems, vision systems and the like, as well as higher level functions and skills of the social robot that are built using those.
  • the social robot SDK may be downloaded from an Atom Package Manager via a suitable user interface.
  • the Social robot Atom package (called social robot-sdk) may be built on Atom™, an Integrated Development Environment (IDE) that is built on Electron™.
  • Social robot skills may be written in JavaScript, for example, and run on Electron™.
  • the social robot Atom package includes a simple UI and may provide access to Chrome DevTools™, a library of pre-created animations, behaviors, images, and sound effects, and tools and markup language for designing highly expressive, character-rich performance of spoken lines with emotive overlays and non-verbal or paralinguistic cues, plus the ability to create skills from scratch.
  • the social robot command line interface may provide the ability to generate and deploy skills directly through a command line interface.
  • Both the social robot Atom package and the social robot CLI may provide access to the social robot Simulator, which allows one to preview animations, 3D body movements, interactions, expressions, and other manifestations of skills before sending them to a social robot.
  • the social robot's animation system may be responsible for coordinating expressive output across the robot's entire body, including motors, light ring, eye graphics and the like.
  • the system may support playback of scripted animations, as well as real-time procedurally rendered expressive behaviors such as expressive look-at and orientation behaviors. Additionally, the system ensures that the robot transitions smoothly from pose to pose or from the end of one animation or expressive behavior into the next.
  • the major elements of the PCD SDK may include:
  • DOFs - Degrees of Freedom
  • Animation Editor - graphical interface for creating animation files interactively using distinct animation layers
  • Behavior Editor - permits editing of a tree of behaviors for each skill
  • Speech Rules Editor - creates rules to be used when speech is detected via the robot's Natural Language Understanding or listening capability
  • a PCD Software Developer Kit (SDK) 2704 may operate as a web-based application tool adapted to give developers an easy way to build PCD Skills (robot applications).
  • the PCD SDK consists of four primary components, some of which may have distinct user interfaces, tool sets, APIs, and the like.
  • the primary components include an animation component 2706, a behavior creation component 2702, a skill simulation component, and a robot maintenance component.
  • PCD Skills may be developed directly in the component user interfaces or by making direct API calls in a programming language, such as JavaScript.
  • the PCD SDK may include user interfaces and may provide access to a library of pre-created animations, behaviors, images, and sound effects, and the like plus the ability to create skills from scratch.
  • the PCD SDK may facilitate generating skills that the PCD may perform.
  • skills may require adherence to a basic structure that the SDK may facilitate creating.
  • the structure described below may be initiated by default when one creates a new skill via the SDK's skill generator capabilities such as the behavior and animation editors described herein.
  • Each skill may be required to have an index.html file that implements the user interface (UI) of the skill and contains elements such as a link to a style file, a tag that defines a division or section of the page (e.g., a div), and an element with an id of PCD containing all UI elements.
  • Each skill may also be configured with a main script that is responsible for exporting one or more skill functions during execution of the skill by the PCD.
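As an illustrative sketch of this structure (the exported function names and the skillContext argument are assumptions for illustration, not the documented skill API), a skill's main script might export entry and cleanup functions that the PCD invokes while running the skill:

    // main.js - hypothetical sketch of a skill's main script.
    // The exported names and the skillContext argument are assumptions.
    'use strict';

    module.exports = {
      // Entry point assumed to be invoked when the PCD starts the skill.
      start: function (skillContext) {
        // skillContext is an assumed handle to the skill's behavior tree,
        // the UI declared in index.html, and other runtime services.
        skillContext.log('skill started');
      },

      // Cleanup hook assumed to be invoked when the skill exits.
      stop: function (skillContext) {
        skillContext.log('skill stopped');
      }
    };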
  • Actions of a skill may be configured with the PCD SDK with certain margins or paddings before and/or after the actions to facilitate realistic transitioning from one action to another, such as from an action of taking a photograph to displaying the captured image.
  • the PCD SDK facilitates configuring skills, transitions among skills, and relevant PCD expressions.
  • the PCD command line interface may provide the ability to generate and deploy skills directly through a command line interface.
  • Both the PCD Atom package and the PCD CLI may provide access to the PCD Simulator, which allows one to preview animations, 3D body movements, interactions, and expressions before sending them to a PCD.
  • behavior editor 2702 in the SDK 2704 may include trees, nodes, leafs, parents, decorators and the like. Behaviors may be accessible in the SDK 2704 as classes that can be configured, ordered, simulated, and deployed with the SDK 2704. Exemplary behavior classes include Blink, Execute Script, ExecuteScriptAsync, Listen, ListenEmbedded, ListenJs, LookAt, Nul, Parallel, PlayAnimation, PlayAudio, Point3D, Random, ReadBarcode, Sequence, Subtree, SubtreeJs, Switch, TakePhoto, TextToSpeech, TextToSpeechJs, TimeoutJs, and the like.
  • Behavior trees are an expressive tool which may be used to model the behavior and control flow of autonomous agents. They are popular in the robotics and video game industries for their ability to coordinate concurrent actions and decision-making processes. Unlike state machines (where there is a single active state at any given time) behavior trees can run multiple behaviors in parallel. This makes them very powerful tools for coordinating all of the PCD sensory input with expressive output. Behavior trees are hierarchical, unlike state machines, which are represented as graphs.
  • Each created skill may contain the main.bt behavior tree by default. This is the behavior tree that executes when the skill is run.
  • a behavior is a node in the behavior tree that performs an action. This could be a very simple action (like playing an audio file) or a complex one (like asking someone's name).
  • IN PROGRESS - the behavior is actively performing some action, like playing an animation or speaking using text-to-speech.
  • a leaf behavior has no children. It is responsible for performing one action, like playing an audio file or making PCD speak.
  • examples of leaf behaviors include playing an animation, taking a photo, pausing for a timeout, and executing JavaScript code.
  • a parent behavior may have children. Its children may be either leaf behaviors or other parent behaviors. In the SDK 2704, there are four core parent behaviors.
  • Sequence - A Sequence will play its children in order from top to bottom. While any of its children are in an IN PROGRESS state, the Sequence will also be in an IN PROGRESS state. If all of a Sequence's children return with status SUCCESS, the Sequence will return with status SUCCESS. As soon as any of its children return with status FAILED, the Sequence will immediately return with status FAILED.
  • a Parallel will remain IN PROGRESS until either one of its children fails (in which case the Parallel will return status FAILED) or until all of its children have SUCCEEDED (in which case the Parallel will return with status SUCCESS).
  • Switch - A Switch behavior is how behavior trees deal with branching logic. A Switch will test all of its children in sequence until one succeeds, at which point the Switch will execute that child and then return with status SUCCEEDED. A Switch will always succeed, even if all of its children fail.
  • Random - A Random behavior will choose one of its children at random. If that child fails, then Random will fail. If the child succeeds, then Random will succeed.
  • With reference to FIG. 30, there is illustrated an exemplary and non-limiting embodiment of both sequence and parallel parent behaviors.
  • the Sequence parent behavior executes its children in order from top to bottom.
  • the Parallel parent behavior executes all its children at the same time.
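The status-propagation rules above can be summarized in a short illustrative sketch (plain JavaScript, not the SDK's Sequence and Parallel classes): a Sequence advances through its children in order and fails as soon as one child fails, while a Parallel ticks every child and fails as soon as any child fails, succeeding only when all children have succeeded:

    // Illustrative semantics only - not the SDK's behavior classes.
    const SUCCEEDED = 'SUCCEEDED', FAILED = 'FAILED', IN_PROGRESS = 'IN PROGRESS';

    function tickSequence(children) {
      for (const child of children) {
        const status = child.tick();
        if (status === FAILED) return FAILED;           // fail immediately
        if (status === IN_PROGRESS) return IN_PROGRESS; // wait on the current child
      }
      return SUCCEEDED;                                 // every child succeeded
    }

    function tickParallel(children) {
      const statuses = children.map((child) => child.tick()); // tick all children together
      if (statuses.includes(FAILED)) return FAILED;
      if (statuses.every((s) => s === SUCCEEDED)) return SUCCEEDED;
      return IN_PROGRESS;
    }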
  • Decorators are not nodes in the tree. They are components that can be added onto a behavior to modify the state of that behavior. Decorators can do four things: modify when a behavior starts, modify when it succeeds, modify when it fails, and restart the behavior.
  • Example decorator classes accessible through the SDK 2704 include:
  • FailOnCondition that explicitly interrupts a behavior that it is decorating if a condition being evaluated is met.
  • StartOnAnimEvent that begins execution of its behavior when an animation fires an event from its event layer.
  • StartOnCondition that prevents the behavior that it is decorating from starting until a condition that it is evaluating is met.
  • StartOnEvent that prevents the behavior that it is decorating from starting until an event is emitted from a behavior tree's global emitter.
  • TimeoutFail that forces the behavior it is decorating to fail after the specified amount of time.
  • TimeoutSucceed that forces the behavior that it is decorating to succeed after the specified amount of time.
  • WhileCondition that will evaluate a condition after the component it is decorating completes successfully. If the condition being evaluated is met, the component is started again. If the condition is not met, the status of the component is delivered (returned) to the function that activated the decorator.
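As a hedged illustration of the decorator idea (this is generic JavaScript showing the semantics of a TimeoutFail-style wrapper, not the SDK's decorator classes or their configuration interface):

    // Illustrative semantics of a TimeoutFail-style decorator, not the SDK class:
    // force the decorated behavior to FAIL if it is still running after ms milliseconds.
    function withTimeoutFail(behavior, ms) {
      let deadline = null;
      return {
        start() {
          deadline = Date.now() + ms;
          behavior.start();
        },
        tick() {
          const status = behavior.tick();
          if (status === 'IN PROGRESS' && Date.now() > deadline) {
            behavior.stop();   // interrupt the decorated behavior
            return 'FAILED';   // report failure to the parent behavior
          }
          return status;
        }
      };
    }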
  • With reference to FIG. 31, there is illustrated an exemplary and non-limiting embodiment of a Case decorator preventing the Sequence behavior from starting until a condition is met.
  • the StartOnAnimEvent decorator prevents the TakePhoto behavior from starting until an event has been emitted from the camera animation in the Play Animation behavior.
  • With reference to FIG. 32, there is illustrated an exemplary and non-limiting embodiment of a user interface rendering of a main behavior tree of a skill.
  • Each skill contains the main behavior tree by default. This is the behavior tree that executes when the skill is run. One may create additional behavior trees in a skill for main.bt to reference.
  • Behaviors execute from the top to bottom of a behavior tree unless otherwise specified.
  • a TextToSpeech leaf behavior may be added as a child of the Parallel parent, which is in turn a child of the Sequence parent.
  • the TextToSpeech behavior has a StartOnAnimEvent decorator and a description. There are three types of behaviors that can be added to a behavior tree: parent behaviors, leaf behaviors, and decorators.
  • Parent behaviors can have child behaviors. Children may be leaf behaviors or other parent behaviors.
  • Sequence executes its children in order from top to bottom.
  • Switch executes the first child that succeeds from top to bottom.
  • Random executes a random one of its children.
  • leaf behaviors cannot contain other behaviors. They execute a single action, like playing an audio file or making PCD speak.
  • As illustrated, decorators modify when a behavior starts, succeeds, or fails. They can also restart behaviors.
  • With reference to FIG. 34, there is illustrated an exemplary and non-limiting embodiment of a decorator configured to change a state of a behavior based on a measurable condition.
  • With reference to FIG. 35, there is illustrated an exemplary and non-limiting embodiment of a user interface for specifying arguments of a behavior.
  • the PCD's animation system may be responsible for coordinating expressive output across the robot's entire body, including motors, light ring, eye graphics and the like.
  • the system may support playback of scripted animations, as well as expressive gaze and orientation behaviors. Additionally, the system ensures that the robot transitions smoothly from pose to pose or from the end of one animation or expressive behavior into the next.
  • FIG. 36 depicts an illustration of the lifecycle of builders and instances across configuration, activation, and run/control
  • builders may be used to store configuration information and other data, and then may be used (and re-used) to spawn active instance handles.
  • a few examples of builders and instances may include AnimationBuilders spawning AnimationInstances, LookatBuilders spawning LookatInstances, TransitionBuilders spawning "in-between" motions, and the like.
  • a builder 3602 may be configured during a configuration process step 3604. During an active process step 3608, a builder 3602 may receive a start input that may initiate the builder creating an instance 3610 of the builder based on, for example, current parameters.
  • a builder 3602 may stand ready to be reused while the initiated instance 3610 may perform actions such as providing motor/graphic/LED values to operational elements of the PCD.
  • an initiated instance of a builder may be used only once.
  • With reference to FIG. 37, there is depicted a diagram that provides a map of the robot's individual degrees of freedom (DOFs), DOF value types, and common DOF groupings.
  • dynamic elements that are synchronized via the animate module may be represented as degrees-of-freedom.
  • DOFs may represent an animate module's smallest separable units that a user of the PCD SDK may control. Some DOFs control the rotation of the PCD's body motors, other DOFs control color channels of the LED light ring, and others control graphical parameters related to the on-screen eye and overlay. While DOFs can be controlled individually, they are typically grouped with other DOFs into DOF sets. Commonly-used DOF sets are provided, for example, in resources available through the SDK.
  • FIG. 37 provides a map of the robot's individual DOFs (in italics), DOF value types, and common DOF groupings (in CAPS):
  • an animate module of the PCD SDK may follow a policy of exclusive DOF ownership by a most recently triggered instance.
  • consider Instance A, which is controlling the robot's body, LED, and eye, and Instance B, which is configured to control only the PCD's body DOFs.
  • As soon as Instance B is triggered, it assumes exclusive control over the robot's body, while Instance A continues to control the robot's LED and eye.
  • animation operation control may be adjusted by parameters or arguments such as those depicted in FIG. 39 including setSpeed 3902, setNumLoops 3904, or related methods.
  • transitions ensure smooth motion from the end of one animated behavior into the beginning of the next.
  • the SDK may automatically generate a transition motion 4004 from the PCD's current state (e.g., its body state) to the start of the selected animation.
  • This transition motion may be inserted in a skill flow before the animation instance, as illustrated in FIG. 40.
  • the transition motion may have a short (or even zero) duration if the PCD current state and the PCD state for the selected animation already line up well.
  • an AnimationBuilder 4002 function of the PCD SDK may come pre-configured with a system-default TransitionBuilder 4004 that may be used to generate the inbound transition for the animation being defined by the AnimationBuilder 4002.
  • This default behavior may be modified via the builder's setTransitionln function.
  • This function may be set via a command, such as animationBuilder.setTransitionIn(transitionBuilder) or via a user interface of the SDK.
  • the SDK may offer at least two basic types of transition builders. For example, LinearTransitionBuilders may generate transition motions using simple linear blending, while AccelerationTransitionBuilders may generate motions that obey configurable acceleration limits. Transition builders may be created from scratch, or by cloning and modifying an existing transition; the SDK may facilitate any of these modes of generating and customizing transitions.
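A hedged sketch of this configuration step follows; setSpeed, setNumLoops, and setTransitionIn correspond to the builder methods described above, while the animate factory calls, the animation file path, and the play method are assumptions for illustration only:

    // Hypothetical usage sketch; factory names, the file path, and play() are assumptions.
    const builder = animate.createAnimationBuilder('animations/wave.keys');
    builder.setSpeed(1.5);     // play back faster than authored
    builder.setNumLoops(2);    // repeat the animation twice

    // Replace the default inbound transition with an acceleration-limited one.
    const transition = animate.createAccelerationTransitionBuilder();
    builder.setTransitionIn(transition);

    // Spawning an instance plays the inbound transition (if needed) and then the animation.
    const instance = builder.play();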
  • an animation, such as an instance of an animation as described herein (which may include, for example, playback of an animation sequence), may be monitored via a number of different events, including STARTED, STOPPED, and CANCELLED, plus other custom events.
  • Event listeners for monitoring active animation instances may be installed using an event listening feature of the AnimationBuilder.
  • a STARTED event fires when the animation instance begins, after the completion of any inbound transition motion, if applicable (e.g., the event fires at animation-time 0).
  • the STOPPED event fires when the animation instance finishes or gets completely interrupted.
  • a STOPPED event's interrupted property can be checked to differentiate these cases.
  • the CANCELLED event fires if the animation instance is removed or completely interrupted before it is ever STARTED.
  • Animations may continue to produce events as long as they remain in control of at least one of the robot's DOFs.
  • the STOPPED event fires when no DOFs remain for a particular animation.
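Continuing the builder sketch above, and assuming an on(...) style of registering listeners (the registration call is an assumption; the STARTED, STOPPED, and CANCELLED event names and the STOPPED event's interrupted property are as described above):

    // Hypothetical event-listener sketch for monitoring an animation instance.
    builder.on('STARTED', () => {
      console.log('animation began (any inbound transition has completed)');
    });
    builder.on('STOPPED', (event) => {
      // interrupted distinguishes a natural finish from a full interruption.
      console.log(event.interrupted ? 'animation was interrupted' : 'animation finished');
    });
    builder.on('CANCELLED', () => {
      console.log('animation was removed or interrupted before it ever started');
    });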
  • FIG. 42 depicts a timeline of events 4202 produced by two overlapping animation instances.
  • Animation A 4204 controls DOFs for body, LED, and eye.
  • Animation B 4208 controls only the body DOF, so it interrupts Animation A's body DOF, but does not interrupt Animation A's LED or eye animations.
  • Events 4202 are produced for Animation A 4204 LED and eye DOFs while Animation B 4208 produces events for the body DOF.
  • Gaze and orientation capabilities of the PCD may be triggered to produce expressive gaze behaviors. These behaviors may be used to expressively direct PCD's body and/or eye towards locations of interest in the surrounding environment.
  • Gaze behaviors may be managed in a fashion similar to animations. Behavior triggering is accomplished using LookatBuilder objects. First, one creates a builder using the createLookatBuilder method. Next, one may optionally configure the builder. Finally, one triggers an instance of the behavior via the builder's startLookat method.
  • Gaze behaviors may operate in one of two modes: single-shot mode or continuous mode. This mode may be configured via the builder's setContinuousMode method. In single-shot mode, a gaze behavior is much like a scripted animation, expressively orienting the robot towards the selected target and then stopping once the target is reached. In continuous mode, the behavior never stops on its own, and the target location may be repeatedly modified using the instance's updateTarget method. This may be useful for face tracking or other situations where the PCD is following a moving target.
  • FIG. 43 depicts configuring a single-shot mode gaze operation. In continuous mode, the target point may be updated at any time.
  • FIG. 44 depicts including custom code with the PCD SDK with the LookAt function to toggle between two different gaze targets every three seconds.
  • With reference to FIG. 45, there is depicted a three-dimensional coordinate system of the social robot referenced by the software development kit.
  • providing targets to the gaze API may sometimes require doing math with 3D vectors. It may be preferred to use a preconfigured module for 3D vector arithmetic and other linear algebra operations.
  • the PCD may use a 3D coordinate system with its origin 4502 at the center of the robot's base. From the PCD's perspective, the positive X axis 4504 points forward, the positive Y axis 4508 points left, and the positive Z axis 4510 points up, as illustrated in the diagram of FIG. 45.
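Combining the gaze methods named above with this coordinate system, a hedged sketch of the FIG. 44 idea (toggling between two gaze targets every three seconds in continuous mode) might look like the following; createLookatBuilder, setContinuousMode, startLookat, and updateTarget are named above, while the animate owner object and the {x, y, z} target format are assumptions:

    // Hypothetical sketch of a continuous-mode gaze that alternates between two targets.
    const lookatBuilder = animate.createLookatBuilder();
    lookatBuilder.setContinuousMode(true);

    // Targets in the PCD's frame: origin at the base, +X forward, +Y left, +Z up.
    const leftTarget  = { x: 1.0, y:  0.5, z: 0.3 };  // one meter ahead, half a meter to the left
    const rightTarget = { x: 1.0, y: -0.5, z: 0.3 };  // one meter ahead, half a meter to the right

    const lookat = lookatBuilder.startLookat(leftTarget);
    let lookingLeft = true;
    setInterval(() => {
      lookingLeft = !lookingLeft;
      lookat.updateTarget(lookingLeft ? leftTarget : rightTarget); // re-aim every three seconds
    }, 3000);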
  • Animations may be created via this interface. Animations may be configured and stored in individual animation files.
  • the animation editor of FIG. 46 may include a body pane 4602, an eye pane 4604, and a timeline pane 4608. Each pane may be zoomed; the color of each pane may be changed; the panes of the animation editor may be reset.
  • the body pane 4602 facilitates direct control of the body segments through point, click, and drag functionality.
  • the timeline pane 4608 may be used to adjust animation length by adjusting the default animation of 30 frames at a frame rate of 30 frames per second.
  • Timelines in the timeline pane may be scaled, such as by a factor of two and the like.
  • Certain frames in the timeline may be locked by setting a keyframe parameter.
  • Frames in the timeline pane 4608 may be edited with commands such as cut, paste, copy, delete and the like.
  • Frames on a time line may be operated via a simulation function so that the body pane 4602 and/or the eye pane 4604 may depict the sequence of animation frames selected in the timeline pane 4608. Simulations may be operated continuously, one frame at a time, or until a stop command is entered, and the like.
  • the Animation Editor enables the creation of animations via layers. One selects a layer in the Animation Editor in order to create an animation for that layer.
  • multiple layers of the same type will blend.
  • Some layers are additive. For example, a layer that moves the PCD's eye up by 1 and a layer that moves the eye down by 2 will blend to create an animation that moves the eye down by 1.
  • Other layers are multiplicative. For example, a layer that scales the PCD's eye by 3 and a layer that scales the PCD's eye by 2 will blend to create an animation that scales the PCD's eye by 6. Layers that do not blend will default to the animations applied on the top layer.
  • With reference to FIG. 47, there is depicted a display screen in which violation of limits of the PCD may be indicated.
  • animation errors can occur if a movement violates acceleration limits or velocity limits. These messages are just warnings.
  • the PCD may still attempt to perform the movements, but the hardware may not be able to support those movements.
  • Limits are based on the additive accumulation of all body layers. For example, if one adds an excessively fast keyframe on one Body layer, but cancels it out on another Body layer, the resulting animation is allowed and the errors will not appear.
  • Tweening - Handling of motion between frames is referred to herein as "tweening." For example, a linear tween will move the PCD's body at a steady pace between keyframes. Animation tweens can be adjusted to ease in and/or out of a keyframe more slowly to mimic natural movement. Tweening affects how the animation moves from the keyframe on which it's applied to the next set keyframe (left to right on the Timeline).
  • With reference to FIG. 48, there is depicted a user interface for configuring an eye layer for controlling the representative eye image of the social robot.
  • the user interface can be used to manipulate the size, position, shape, and rotation of a PCD's eye. Tweening can also be applied to eye animations.
  • the eye of the PCD can be resized, scaled with or without constrained proportions, reshaped, rotated and the like.
  • the user interface of FIG. 48 facilitates direct control of the eye through point, click, and drag functionality.
  • the PCD eye layer configuration user interface may include a body pane 4902, an eye pane 4904, and a timeline pane 4908. Simulations of animations may be depicted in the body pane 4902 and may include body, light ring, eye, and audio simulation.
  • the eye pane 4904 may provide both control and simulation of animation of the eye 4910 including at least color and texture animation. Colors interpolate from animation frame to frame through the RGB space and may be additive. Colors may be blended on top of textures; therefore, configuring the eye texture to other than the default white will impact how each color is depicted due to the impact of the underlying texture color.
  • a user interface for configuring an eye overlay layer for controlling the representative eye image of the social robot is depicted.
  • the user interface may include a body pane 5002, an eye overlay pane 5004, and a timeline pane 5008.
  • the Overlay layer appears on the PCD's screen in front of its eye. Overlays can be modified in the eye overlay layer user interface.
  • the overlay is constructed substantially identically to the eye, including substantially all of the same properties.
  • Eye overlay layers may be added, removed, moved, tweened independently of the eye, resized with or without constrained proportions, reshaped and rotated.
  • With reference to FIG. 51, there is depicted a user interface for configuring an eye overlay texture layer for controlling the representative eye image of the social robot.
  • a texture for an eye overlay can be configured.
  • the user interface of FIG. 51 enables the overlay texture to be changed, the overlay color to be changed, and the like.
  • while textures on different layers may not blend, colors may blend through RGB space and are additive. If an overlay has a colored texture, color manipulations may blend with the texture color as well.
  • a display background layer texture may be configured.
  • a background texture layer may appear on the PCD screen behind the eye. The background texture can be changed as well as the background texture color.
  • With reference to FIG. 53, there is depicted a user interface for configuring an LED disposed around a body segment of the social robot.
  • the LED can be controlled for aspects such as on/off, on-intensity, on-color, color temperature, color saturation, rate of change, and the like. These elements may be selected and/or entered in the LED configuration user interface of FIG. 53.
  • Events are described herein above in regards to animation operational control for stopping, starting, interrupting and the like.
  • event layers may also be configured and/or customized through the PCD SDK user interfaces.
  • FIG. 54 depicts a user interface for configuring an event with information such as the event name and a payload for testing a condition of an associated animation upon activation of the event.
  • Events may also link animations with behaviors that are described in reference to Figs. 27-35 herein. To create a behavior that occurs during a specific frame of an animation, one first creates an event. Events link animations and behaviors. The behavior editor material included herein describes how to use events in the Behavior Tool. Multiple event layers are allowed. All events will be executed; events set to the same frame will be executed in order from top layer to bottom layer.
  • audio may be added to an animation by associating an audio file or other content with an animation.
  • the audio event layer user interface also includes controls, such as a scrubber icon to allow a user to select a portion of an audio file to associate with an animation.
  • With reference to FIG. 56, there is illustrated an exemplary and non-limiting embodiment of a speech rule called "reservations.rule", created in the PCD SDK Speech Rules Editor 5600.
  • the PCD exhibits Natural Language Understanding (NLU).
  • Embedded rules are built into the SDK. Additionally, phrase-spotting is built into PCD's software, so there is no requirement to reach up to the cloud to process speech.
  • the ListenEmbedded behavior or SucceedOnEmbedded decorator may be used to do so by specifying 'Hey PCD' as the ruleName in the arguments, and the PCD will return the string 'hey PCD' when it hears someone say 'hey PCD'.
  • the PCD SDK may come preloaded with one or more rules, such as a rule that recognizes a speaker saying "Hey PCD".
  • the PCD may come configured to listen for these rules; no rule coding is needed.
  • With reference to FIG. 57, there is illustrated an exemplary and non-limiting embodiment of the PCD idling until someone says 'hey PCD,' at which point the PCD stops idling and uses text-to-speech to say hello.
  • a custom rule may be created to cause the PCD to listen for something other than what's available through embedded rules. This may be accomplished via the use of, for example, Listen behavior, ListenJs behavior, SucceedOnListen decorator, or SucceedOnListenJs decorator to accomplish this.
  • the PCD listens for auditory input and returns string variables based on what it hears.
  • Speech rules in the SDK currently only return string values, even if the PCD hears a number. Support for returning numbers and integers may be added in the future.
  • TopRule ( reserve me a flight {book='air'} )
  • the PCD listens for the phrase 'reserve me a flight.' If it hears that exact phrase, the SDK returns a variable named book with value air. This is not especially useful on its own; the rule can be expanded to add more complexity, for example by listening for more than a single fixed phrasing.
  • the Listen behavior and SucceedOnListen decorator take file arguments in which one selects a rule file.
  • With reference to FIG. 58, there is illustrated an exemplary and non-limiting embodiment of creating custom listening rules for the PCD using files of the ".rule" type.
  • With reference to FIG. 59, there is illustrated an exemplary and non-limiting embodiment of telling the PCD to listen for 'hey PCD'.
  • With reference to FIG. 60, there is illustrated an exemplary and non-limiting embodiment of creating dynamic listening rules for the PCD using JavaScript.
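As a hedged sketch of what a dynamic, JavaScript-built listening rule might look like for a ListenJs-style behavior (the exported function shape and the returned structure are assumptions, not the documented rule format):

    // Hypothetical sketch of a dynamic listening rule built at runtime.
    // The exported function shape and returned structure are assumptions.
    module.exports = function buildGreetingRule(knownNames) {
      // Spot "hey <name>" for every person the PCD currently knows, so the
      // rule can grow as new people are introduced without editing .rule files.
      return {
        name: 'greeting',
        phrases: knownNames.map((name) => 'hey ' + name)
      };
    };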
  • the PCD SDK facilitates configuring and using mechanisms for introducing variability, handling errors, and coordinating Voice User Interface (VUI) and Graphical User Interface (GUI) interactions.
  • the PCD SDK provides Multimodal Interaction Modules (MIMs) to remove the burden from developers and to provide consistency across skills.
  • MIMs handle interactions that have more than one mode, including voice (VUI) and touchscreen (GUI) modes.
  • MIM behavior comprises a state machine that is configured across a variety of parameters including the MIM type (e.g., question, announcement, optional response, and the like), a speech recognition rule used to parse the user's utterances, text to speech (TTS) prompts, prompts when an audio input is expected but is not received (e.g., the user has not responded to a prompt), unmatched prompt responses, and the like.
  • the PCD SDK MIM Editor allows you to match up TTS prompts with speech recognition rule files and API input.
  • MIMs can be configured with any number of prompts.
  • When a MIM Behavior executes, it picks one of the available prompts from the appropriate category.
  • the prompt can be chosen randomly or using logic supplied in the Condition field.
  • multiple Entry-Core prompts can be defined with Condition fields that filter them based on available data.
  • the CheckLights MIM in FIG. 61 uses a YesNo rule to process the user's utterances.
  • the PCD is able to present Yes and No touchscreen buttons on its screen. Tapping them produces the same result as saying "yes" or "no."
  • the Failures to Trigger GUI field is set to 1. This tells the PCD to present the GUI only if there is a NoMatch error, meaning that the speaker is having trouble communicating with the PCD via voice. When this value is set to 0, the PCD will always present the GUI.
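As a hedged illustration of how such a MIM might be laid out as data (the object shape and field names below simply mirror the editor fields described above and are assumptions, not the SDK's actual file format):

    // Hypothetical data layout mirroring the MIM editor fields described above.
    const checkLightsMim = {
      type: 'question',                    // MIM type (question, announcement, optional response)
      rule: 'YesNo.rule',                  // speech recognition rule used to parse the user's utterances
      prompts: {
        entryCore: [
          { tts: 'Did you want me to check the lights?' },
          { tts: 'Shall I check whether the lights are on?', condition: 'speaker.isAdult' }
        ],
        noInput: ['Sorry, I did not hear you. Should I check the lights: yes or no?'],
        noMatch: ['I did not catch that. You can say yes or no.']
      },
      failuresToTriggerGui: 1              // show the Yes/No buttons only after one NoMatch error
    };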
  • MIMs are configured with a speech recognition rule to parse the user's utterances.
  • the parser looks for patterns that are defined in a rule file associated with the MIM. Rules may comply with a rule syntax that enables multiple responses to comply with a MIM expectation. In an example, a rule may be configured so that an expected response of YES may include "right", "I'm good", and the like.
  • FIG. 62 depicts a user interface of the PCD SDK that facilitates editing MIM rules.
  • a result object is displayed in the result pane 6208 for a given rule in the rule pane 6204.
  • the PCD SDK also includes an API for MIMs.
  • This API may enable timing out when a reply is expected but is not received.
  • Various features of a MIM may be controlled by the API including entry string, match strings, no_match strings, repeat, thanks strings, verbose response strings, and the like.
  • the PCD can be operated through the use of control flows that are made up of activities such as flow-executor activities, MIM activities, and behaviors.
  • FIG. 63 depicts an exemplary flow editor of the PCD SDK.
  • PCD control flows may be interpreted by a flow engine executing on the PCD.
  • the PCD SDK includes a flow editor tool for creating and editing robot skills in a flowchart-like fashion. Flows can be configured in the flow editor tool by adding activities and controls for the activities. These controls are called flow-executor activities and are used to control activities in the flow.
  • Flow-executor activities can be used to begin a flow, execute parallel flows, end flows, access another flow, execute a behavior tree, interrupt a flow, skip around in a flow using throws and catches, execute arbitrary JavaScript, and the like.
  • Other activities that can be added to a flow include MIM activities and behavior activities.
  • Flows can also include transition processing that facilitates using results of one activity in the flow to determine the next activity to perform.
  • the PCD SDK may be used to create skills and assets for operating the PCD.
  • the basic steps for creating a skill include using the SDK user interface to open a new skill project; create an animation; create a behavior tree to perform activities, such as body movement, eye activity, and LED ring illumination; create a speech rule that directs the PCD to listen; create one or more MIMs to facilitate multi-modal interaction in response to the PCD applying the speech rule; create a GUI menu to display on the display screen of the PCD; create a flow that brings these created elements together into a coherent sequential flow of activities; use the simulator to validate that the flow of activities that comprise the skill is as expected; download the skill to the PCD.
  • a behavior editor may facilitate configuring a hierarchical structure of PCD behaviors.
  • a behavior editor may facilitate controlling behavioral activity of the PCD based on a behavioral tree structure that may be comprised of nodes, leafs, parent behaviors, and the like.
  • a behavior editor may facilitate configuring PCD behaviors in a hierarchical behavior tree that orders behavior in at least one of sequential and parallel order.
  • a behavior tree comprises control sequences of a PCD to facilitate coordinating actions of PCD processing resources and decision-making processes.
  • a behavior editor may enable a user to control at least one perceptual system and at least one expressive system of a plurality of expressive systems of the PCD.
  • a behavior tree editor may facilitate controlling PCD behaviors based on at least four distinct behavior states including an invalid behavior state, an in progress behavior state, a succeeded behavior state, and a failed behavior state.
  • An invalid behavior state may indicate that a specific behavior has not yet started.
  • An in progress behavior state may indicate that the PCD is actively performing a specific behavior.
  • a succeeded behavior state may indicate that the PCD has finished performing a specific behavior based on a determination that the behavior was successful.
  • a failed behavior state may indicate that the PCD has finished performing a specific behavior based on a determination that the behavior was unsuccessful.
  • a behavior editor may facilitate configuring a leaf behavior as a functional element of a behavior tree defining a PCD behavior in a hierarchical structure of behaviors, wherein the leaf behavior has no lower level behaviors.
  • a behavior editor may facilitate configuring a set of PCD behaviors as a combination of parent and child behaviors, wherein a child behavior is performed by the PCD after completion of a corresponding parent behavior.
  • a parent behavior may be configured with a behavior editor to perform one or more child behaviors individually, such as in a predefined sequence, or in parallel.
  • a child behavior may be selected by a parent behavior in a behavior tree based on at least one of sequential child behavior execution, parallel child behavior execution, switched child behavior execution, and random child behavior execution.
  • a behavior editor may facilitate configuring at least one behavior decorator for a PCD behavior being configured by the behavior editor.
  • a behavior decorator may operate on the PCD to modify a state of its corresponding behavior.
  • a PCD SDK may include a user interface through which a behavior tree may be configured and associated with a PCD skill so that the behaviors in the associated behavior tree are executed when the skill executes on the PCD, based on the behavior tree properties that are configured by a user with the behavior editor.
  • a behavior editor may comprise a behavior tool suite of a PCD SDK.
  • a behavior tool suite of a PCD SDK may comprise a behavior editor.
  • a behavior editor may comprise a plurality of behavior user interfaces adapted to at least one of create, define, configure, simulate, revise, and deploy a behavior to a PCD.
  • An animation tool suite of a PCD SDK may include an animation editor adapted to facilitate controlling assets of a PCD, the assets of the PCD including at least one of a multi- axis body of a plurality of moveable segments, a light source, a display screen, and an audio output system.
  • An animation editor may facilitate controlling PCD assets during transitions between animation actions.
  • An animation editor may facilitate configuring a set of PCD animation actions based on animation builders that spawn execution by the PCD processor of at least one animation instance derived from a specific animation builder module. The at least one spawned animation instance controls assets of the PCD. Control of at least one of the assets is exclusively controlled by the animation instance while the animation instance is executing on the PCD.
  • An animation tool suite of a PCD SDK may instantiate an animation builder in an execution sequence of a PCD.
  • An animation tool suite may provide access to control a plurality of degrees of freedom of a PCD, wherein each degree of freedom is associated with an actionable physical aspect of the PCD.
  • Degrees of freedom of a PCD may be associated with one or more moveable body segments of the PCD.
  • Degrees of freedom of a PCD may be associated with an electronic display screen of the PCD.
  • Degrees of freedom of a PCD may be associated with a light source, such as a light ring disposed on one or more body segments of a PCD.
  • Degrees of freedom of a PCD may include an illustrated eye displayed on the electronic display of the PCD.
  • An animation tool suite may facilitate assigning one or more degrees of freedom of a PCD to an animation instance.
  • An animation tool suite may facilitate changing control of a degree of freedom of a PCD between distinct animation instances.
  • An animation tool suite may facilitate control of speed, interactions, and the like of animations of one or more degrees of freedom of a PCD.
  • An animation tool suite for controlling degrees of freedom of a PCD may facilitate animation control based on events associated with an animation instance including started, stopped and cancelled animation events.
  • An animation tool suite of a PCD SDK may include user interface control elements displayed in an electronic display of a computing device that may facilitate expressive gaze animation control of the PCD. Expressive gaze animation may be performed as a single movement to a gaze point that may be defined in a three-dimensional space relative to the PCD.
  • expressive gaze animation may be performed as a continuous movement with gaze target tracking that facilitates animating the PCD to continuously gaze at the gaze target.
  • An animation editor of a PCD SDK, such as one associated with an animation tool suite, may include a body pane in which a visual representation of the PCD is depicted, an eye pane in which a visual representation of an eye displayed on the electronic display screen of the PCD is depicted, and a timeline pane in which animation actions are depicted in a timeline of animations, transitions, and the like.
  • a multi-modal interaction module (MIM) editor for introducing variability, handling errors, and the like during interactions between a PCD and a human participant may facilitate controlling more than one mode of interaction.
  • a MIM editor may facilitate configuring a plurality of multi-modal prompts to be produced by the PCD in response to a perceived condition, such as a spoken response, tactile input, a lack of response, and the like.
  • a MIM editor facilitates controlling multi-modal responses during interactions with a participant, such as a human, another PCD, and the like.
  • a MIM editor may facilitate controlling a display screen of the PCD so that it coordinates with audio output by the PCD, wherein the display screen is adapted to receive tactile input and associate that input with one of a plurality of response options based on a visual presentation on the display screen.
  • a MIM editor may facilitate configuring a speech recognition rule that may control how a PCD's Natural Language Understanding processor interprets utterances from a human in proximity to the PCD.
  • a MIM editor may include a user interface that may include a phrase input field, a result object display pane, and a speech recognition rule pane.
  • a MIM editor may further facilitate recovery from errors or unrecognized responses through enabling user configuration of no-match and no-input actions to be taken by the PCD.
  • the methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor.
  • the processor may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform.
  • a processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like.
  • the processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a coprocessor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon.
  • the processor may enable execution of multiple programs, threads, and codes.
  • the threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application.
  • methods, program codes, program instructions and the like described herein may be implemented in one or more threads.
  • the thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code.
  • the processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere.
  • the processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere.
  • the storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like.
  • a processor may include one or more cores that may enhance speed and performance of a multiprocessor.
  • the processor may be a dual-core processor, quad-core processor, or other chip-level multiprocessor and the like that combines two or more independent cores (called a die).
  • the methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware.
  • the software program may be associated with a server that may include a file server, print server, domain server, Internet server, intranet server and other variants such as secondary server, host server, distributed server and the like.
  • the server may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like.
  • the methods, programs or codes as described herein and elsewhere may be executed by the server.
  • other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server.
  • the server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of a program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope.
  • any of the devices attached to the server through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions.
  • a central repository may provide program instructions to be executed on different devices.
  • the remote repository may act as a storage medium for program code, instructions, and programs.
  • the software program may be associated with a client that may include a file client, print client, domain client, Internet client, intranet client and other variants such as secondary client, host client, distributed client and the like.
  • the client may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like.
  • the methods, programs or codes as described herein and elsewhere may be executed by the client.
  • other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client.
  • the client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of a program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope.
  • any of the devices attached to the client through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions.
  • a central repository may provide program instructions to be executed on different devices.
  • the remote repository may act as a storage medium for program code, instructions, and programs.
  • the methods and systems described herein may be deployed in part or in whole through network infrastructures.
  • the network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art.
  • the computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like.
  • the processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements.
  • the methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells.
  • the cellular network may either be a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network.
  • the cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like.
  • the cell network may be a GSM, GPRS, 3G, EVDO, mesh, or other network type.
  • the methods, program codes, and instructions described herein and elsewhere may be implemented on or through mobile devices.
  • the mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices.
  • the computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices.
  • the mobile devices may communicate with base stations interfaced with servers and configured to execute program codes.
  • the mobile devices may communicate on a peer to peer network, mesh network, or other communications network.
  • the program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server.
  • the base station may include a computing device and a storage medium.
  • the storage device may store program codes and instructions executed by the computing devices associated with the base station.
  • the computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g., USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like.
  • the methods and systems described herein may transform physical and/or intangible items from one state to another.
  • the methods and systems described herein may also transform data representing physical and/or intangible items from one state to another.
  • machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like.
  • the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions.
  • the methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application.
  • the hardware may include a general purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device.
  • the processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory.
  • the processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It may further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium.
  • the computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions.
  • each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof.
  • the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware.
  • the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Electromagnetism (AREA)
  • Computer Hardware Design (AREA)
  • Power Engineering (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Manipulator (AREA)
  • Stored Programmes (AREA)
  • Toys (AREA)

Abstract

A development platform for developing a skill for a persistent companion device (PCD) includes an asset development library having an application programming interface (API) configured to enable a developer to at least one of find, create, edit and access one or more content assets utilizable for creating a skill, an expression tool suite having one or more APIs via which are received one or more expressions associated with the skill as specified by the developer wherein the skill is executable by the PCD in response to at least one defined input, a behavior editor for specifying one or more behavioral sequences of the PCD for the skill and a skill deployment facility having an API for deploying the skill to an execution engine of the PCD.

Description

PERSISTENT COMPANION DEVICE CONFIGURATION
AND DEPLOYMENT PLATFORM
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. provisional patent application Ser. No. 62/316,247 [JIBO-0004-P01], filed March 31, 2016, which is incorporated herein by reference in its entirety.
BACKGROUND
[0002] Field of the invention
[0003] The present application generally relates to a persistent companion device. In particular, the present application relates to an apparatus and methods for providing a companion device adapted to reside continually in the environment of a person and to interact with a user of the companion device to provide emotional engagement with the device and/or associated with applications, content, services or longitudinal data collection about the interactions of the user of the companion device with the companion device.
[0004] Description of the Related Art
[0005] While devices such as smart phones and tablet computers have increasing capabilities, such as networking features, high definition video, touch interfaces, and applications, such devices are limited in their ability to engage human users, such as to provide benefits of companionship or enhanced emotional experience from interacting with the device. A need exists for improved devices and related methods and systems for providing companionship.
SUMMARY
[0006] The present disclosure relates to methods and systems for providing a companion device adapted to reside continually in the environment of a person and to interact with a user of the companion device to provide emotional engagement with the device and/or associated with applications, content, services or longitudinal data collection about the interactions of the user of the companion device with the companion device. The device may be part of a system that interacts with related hardware, software and other components to provide rich interaction for a wide range of applications as further described herein.
[0007] In accordance with an exemplary and non-limiting embodiment, a development platform for developing a skill for a persistent companion device (PCD) comprises an asset development library having an application programming interface (API) configured to enable a developer to at least one of find, create, edit and access one or more content assets utilizable for creating a skill that is executable by the PCD, an expression tool suite having one or more APIs via which are received one or more expressions associated with the skill as specified by the developer wherein the skill is executable by the PCD in response to at least one defined input, a behavior editor for specifying one or more behavioral sequences of the PCD for the skill and a skill deployment facility having an API for deploying the skill to an execution engine for executing the skill.
[0008] In accordance with an exemplary and non-limiting embodiment, a platform for enabling development of a skill using a software development kit (SDK) comprises a logic level module configured to map received inputs to coded responses and a perceptual level module comprising a vision function module configured to detect one or more vision function events and to inform the logic level module of the one or more detected vision function events, a speech/sound recognizer configured to detect defined sounds and to inform the logic level module of the detected speech/sounds and an expression engine configured to generate one or more animations expressive of defined emotional/persona states and to transmit the one or more animations to the logic level module.
[0009] Skill development platform methods and systems include a system for developing a skill for a persistent companion device (PCD). The system may include an asset development library that is accessible via an application programming interface (API) executing on a processor, configured to enable a developer to at least one of find, create, edit and access one or more content assets utilizable for creating a skill that is executable by the PCD. The system may also include an animation tool suite executing on the processor and having one or more APIs via which operation of one or more physical elements of the PCD for the skill including at least two of an electronic display, a plurality of movable body segments, a speech output system, and a multi-color light source are specified by the developer, wherein the skill is executable by the PCD in response to at least one input that is defined by the developer. The system may also include a behavior editor executing on the processor for specifying one or more behavioral sequences of the PCD for the skill. Additionally, the system may include a skill deployment facility executing on the processor and adapted for deploying the skill to an execution engine for executing the skill. In this system, the skill deployment facility may deploy the skill via an API. Additionally, the behavior editor may facilitate operation of a sensory input system and an expressive output system of the PCD.
[0010] Skill development SDK methods and systems may include a system for enabling development of a persistent companion device (PCD) skill using a software development kit (SDK) that may include a logic level mapping system operating on a processor configured to map received inputs to the PCD to coded responses. The system may also include a PCD behavior tool suite operating on the processor adapted to configure a perceptual engine of the PCD; the tool suite including a vision function system configured via the behavior tool suite to detect one or more vision function events and to inform the logic level mapping system of the one or more detected vision function events, and a speech/sound recognition and understanding system configurable by the behavior tool suite to detect defined sounds and to inform the logic level mapping system of the detected speech/sounds. The system may also include a PCD animation tool suite operating on the processor adapted to configure an expression engine to generate one or more animations expressive of at least one defined state in response to at least one input and to transmit the one or more animations to the logic level mapping system for mapping of the animations to the inputs. In this system, the defined state may be at least one of an emotional state, a persona state, a cognitive state, and a state expressing a defined level of energy.
[0011] PCD software development kit user interface methods and systems may include a system for configuring a persistent companion device (PCD) to perform a skill. The system may include a software development kit executing on a networked server. The SDK may include a plurality of animation user interface screens through which a user configures animation associated with the skill, the plurality of user interface screens facilitating specification of the operation of physical elements of the PCD including at least two of an electronic display, a plurality of movable body segments, a speech output system, and a multicolor light source. The SDK may also include a plurality of behavior user interface screens through which the user configures behavior of the PCD for coordinating robot actions and decisions associated with the skill, the plurality of behavior user interface screens facilitating operation of an expressive output system of the PCD in response to a sensory input system of the PCD. Also, a graphical representation of the PCD in at least one of the animation user interface screens and the behavior user interface screens represents the movement of the PCD in response to inputs based on the configuration by the user. Additionally, the SDK may include a gaze orientation user interface screen through which a user configures the PCD to expressively orient a display screen of the PCD toward a target located in proximity to the PCD as a point in a three-dimensional space relative to the PCD, the PCD responding to the target in at least one of a single-shot mode and a continuous target-tracking mode.
[0012] Methods and systems may include a system for animating a persistent companion device (PCD) that may include an animation editor executing on a networked server providing access to PCD animation configuration and control functions of the PCD via a software development kit. The system may also include an electronic interface to a PCD, the PCD configured with a plurality of interconnected moveable body segments, motors for rotation thereof, at least one light ring, an electronic display screen, and an audio system. Additionally, the system may include a PCD animation application programming interface via which the animation editor controls at least a portion of the features of the PCD. Also, the system may include a plurality of animation builders configurable by a user of the animation editor, the animation builders spawning animation instances that indicate active animation sessions. The system may further include a behavior transition system for specifying transition of the PCD from a first animation instance to a second animation instance in response to a signal.
[0013] Methods and systems described herein may include a system for controlling behaviors of a persistent companion device (PCD). The system may include a behavior editor executing on a networked server providing access to PCD behavior configuration and control functions of the PCD via a software development kit. The system may also include a plurality of behavior tree data structures accessible by the behavior editor that facilitate controlling behavior and control flow of autonomous robot operational functions, the operational functions including a plurality of sensor input functions and a plurality of expressive output functions, wherein the plurality of behavior tree data structures organize control of robot operational functions hierarchically, wherein at least one behavior tree data structure is associated with at least one skill performed by the PCD. The system may further include a plurality of behavior nodes of each behavior tree, each of the plurality of behavior nodes associated with one of four behavior states consisting of an invalid state, an in-progress state, a successful state, and a failed state. The system may also include at least one parent behavior node of each behavior tree, the at least one parent node referencing at least one child behavior node and adapted to initiate at least one of sequential child behavior node operation, parallel child behavior node operation, switching among child behavior nodes, and randomly activating a referenced child behavior node. In this system, at least a portion of the behavior nodes are each configured with a behavior node decorator that functions to modify a state of its behavior node by performing at least one of preventing a behavior node from starting, forcing an executing behavior node to succeed, forcing an executing behavior node to fail, and re-executing a behavior node.
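By way of example and not of limitation, the following Python sketch illustrates the four-state behavior node model and the parent and decorator node types described above. The class names, tick semantics, and composition shown are illustrative placeholders only and do not represent the actual SDK API.

```python
# Illustrative sketch of four-state behavior nodes, a sequential parent, a
# parallel parent, and a "force success" decorator. Names are placeholders,
# not the actual SDK API.
from enum import Enum

class Status(Enum):
    INVALID = 0      # not yet started
    IN_PROGRESS = 1  # still executing
    SUCCEEDED = 2
    FAILED = 3

class Behavior:
    def __init__(self):
        self.status = Status.INVALID
    def tick(self):
        raise NotImplementedError

class Sequence(Behavior):
    """Parent node: runs child nodes one after another; fails on first failure."""
    def __init__(self, *children):
        super().__init__()
        self.children = list(children)
    def tick(self):
        for child in self.children:
            if child.status in (Status.INVALID, Status.IN_PROGRESS):
                child.status = child.tick()
                if child.status in (Status.IN_PROGRESS, Status.FAILED):
                    self.status = child.status
                    return self.status
        self.status = Status.SUCCEEDED
        return self.status

class Parallel(Behavior):
    """Parent node: ticks all children each cycle; succeeds when all succeed."""
    def __init__(self, *children):
        super().__init__()
        self.children = list(children)
    def tick(self):
        results = [c.tick() for c in self.children]
        if any(r == Status.FAILED for r in results):
            self.status = Status.FAILED
        elif all(r == Status.SUCCEEDED for r in results):
            self.status = Status.SUCCEEDED
        else:
            self.status = Status.IN_PROGRESS
        return self.status

class ForceSuccess(Behavior):
    """Decorator: modifies its child's state by forcing a finished node to succeed."""
    def __init__(self, child):
        super().__init__()
        self.child = child
    def tick(self):
        result = self.child.tick()
        self.status = Status.IN_PROGRESS if result == Status.IN_PROGRESS else Status.SUCCEEDED
        return self.status
```

A skill's tree could then be assembled from such nodes, for example Sequence(LookAtUser(), Parallel(SayHello(), WaveAnimation())), where the leaf classes named here are hypothetical skill-specific behaviors.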
[0014] Methods and systems described herein may include a system for recognizing speech with a persistent companion device (PCD). The system may include a PCD speech recognition configuration system that facilitates natural language understanding by a PCD, the system comprising a plurality of user interface screens by which a user operates a speech rule editor executing on a networked computer to configure speech understanding rules comprising at least one of an embedded rule and a custom rule. The system may further include a development kit comprising a library of embedded speech understanding rules accessed by the user via the networked server. Additionally, the system may include a robot behavior association function of the software development kit by which a user associates speech understanding rules with at least one of a listen-type PCD behavior and a listen success decorator that the user configures to cause the PCD to perform an operation based on a successful result of a condition tested by the listen success decorator.
[0015] Methods and systems described herein may include a persistent companion device (PCD) control configuration system. The system may include a PCD animation configuration system that facilitates controlling expressive output of the PCD through playback of scripted animations, responsive operation of the PCD for events detected by event listeners that are configurable by a user, and a plurality of animation layers that facilitate specifying animation commands. The system may further include a PCD behavior configuration system that facilitates controlling mechanical and electronic operation of the PCD. The system may also include a PCD gaze orientation configuration system that facilitates determining directional activity of the gaze of the PCD by specifying a target and a gaze PCD functional mode of at least one of single-shot and target-tracking. Additionally, the system may include a PCD speech recognition configuration system comprising a plurality of embedded rules for recognizing human speech, and a user interface for customizing rules for recognizing human speech, wherein the human speech is captured by an audio sensor input system of the PCD. In this system, controlling mechanical and electronic operation of the PCD through robot behavior comprises controlling transitions between animated behaviors, controlling a plurality of animated behaviors in at least one of parallel control and sequential control, and controlling a plurality of child behaviors based on a behavior tree of parent and child behaviors, wherein a child behavior is activated based on one of a switch condition for selecting among the child behaviors and randomly selecting among the child behaviors.
BRIEF DESCRIPTION OF THE FIGURES
[0016] In the drawings, which are not necessarily drawn to scale, like numerals may describe substantially similar components throughout the several views. Like numerals having different letter suffixes may represent different instances of substantially similar components. The drawings illustrate generally, by way of example, but not by way of limitation, a detailed description of certain embodiments discussed in the present document.
[0017] FIG. 1 illustrates numerous views of PCD.
[0018] FIG. 2 illustrates software architecture of the PCD.
[0019] FIG. 3 illustrates architecture of a psycho-social interaction module (PSIM).
[0020] FIG. 4 illustrates a task network that shows a simplified version of a greeting interaction by the PCD.
[0021] FIG. 5 illustrates hardware architecture of the PCD.
[0022] FIG. 6 illustrates mechanical architecture of the PCD.
[0023] FIG. 7 illustrates a flowchart for a method to provide a call answering and messaging service.
[0024] FIG. 8 illustrates a flowchart for a method to relay a story by the PCD.
[0025] FIG. 9 illustrates a flowchart for a method to indicate and/or influence emotional state of a user by use of the PCD.
[0026] FIG. 10 illustrates a flowchart for a method to enable story acting or animation feature by the PCD.
[0027] FIG. 11 illustrates a flowchart for a method to generate and encode back stories.
[0028] FIG. 12 illustrates a flowchart for a method to access interaction data and use it to address a user's needs.
[0029] FIG. 13 illustrates a flowchart for a method to adjust behavior of the PCD based on user inputs.
[0030] FIG. 14 illustrates an example of displaying a recurring, persistent, or semi-persistent, visual element.
[0031] FIG. 15 illustrates an example of displaying a recurring, persistent, or semi-persistent, visual element.
[0032] FIG. 16 illustrates an example of displaying a recurring, persistent, or semi-persistent, visual element.
[0033] FIG. 17 illustrates an exemplary and non-limiting embodiment of a runtime skill for a PCD.
[0034] FIG. 18 is an illustration of an exemplary and non-limiting embodiment of a flow and various architectural components for a platform enabling development of a skill using the SDK.
[0035] FIG. 19 is an illustration of an exemplary and non-limiting embodiment of a user interface that may be provided for the creation of assets.
[0036] FIG. 20 is an illustration of exemplary and non-limiting screen shots of a local perception space (LPS) visualization tool that may allow a developer to see the local perception space of the PCD.
[0037] FIG. 21 is an illustration of a screenshot of a behavior editor according to an exemplary and non-limiting embodiment.
[0038] FIG. 22 is an illustration of a formal way of creating branching logic according to an exemplary and non-limiting embodiment.
[0039] FIG. 23 is an illustration of an exemplary and non-limiting embodiment whereby select logic may be added as an argument to a behavior.
[0040] FIG. 24 is an illustration of an exemplary and non-limiting embodiment of a simulation window.
[0041] FIG. 25 is an illustration of an exemplary and non-limiting embodiment of a social robot animation editor of a social robot expression tool suite.
[0042] FIG. 26 is an illustration of an exemplary and non-limiting embodiment of a PCD animation movement tool.
[0043] FIG. 27 depicts a block diagram of an architecture of a social robot-specific software development kit.
[0044] FIG. 28 depicts a behavior tree snippet in which two behaviors are executed at the same time, then a second behavior, then a third behavior.
[0045] FIG. 29 depicts a leaf behavior.
[0046] FIG. 30 depicts a user interface display of sequential and parallel parent behaviors.
[0047] FIG. 31 depicts a user interface display of a behavior decorator.
[0048] FIG. 32 depicts a main behavior tree of a skill.
[0049] FIG. 33 depicts a user interface of a behavior editor for editing a behavior tree leaf.
[0050] FIG. 34 depicts a decorator configured to change a state of a behavior based on a measurable condition.
[0051] FIG. 35 depicts a user interface for specifying arguments of a behavior.
[0052] FIG. 36 depicts an illustration of the lifecycle of builders and instances across configuration, activation, and run/control.
[0053] FIG. 37 depicts a diagram that provides a map of the robot's individual DOFs, DOF value types, and common DOF groupings.
[0054] FIG. 38 depicts an animate module following a policy of exclusive DOF ownership by the most recently triggered animate instance.
[0055] FIG. 39 depicts an alternate embodiment for the embodiment of FIG. 38.
[0056] FIG. 40 depicts configuring a transaction with an animation.
[0057] FIG. 41 depicts timing of exemplary core animation events.
[0058] FIG. 42 depicts a timeline of events produced by two overlapping animation instances.
[0059] FIG. 43 depicts an example of a look-at orientation configuration interface.
[0060] FIG. 44 depicts including custom code in a behavior tree node for toggling between two different look-at targets.
[0061] FIG. 45 depicts a three coordinate system of the social robot referenced by the software development kit.
[0062] FIG. 46 depicts a user interface of the software development kit for editing animations.
[0063] FIG. 47 depicts a user interface for configuring a body layer for controlling the segments of the social robot.
[0064] FIG. 48 depicts a user interface for configuring an eye layer for controlling the representative eye image of the social robot.
[0065] FIG. 49 depicts a user interface for configuring an eye texture layer for controlling a texture aspect of the representative eye image of the social robot.
[0066] FIG. 50 depicts a user interface for configuring an eye overlay layer for controlling the representative eye image of the social robot.
[0067] FIG. 51 depicts a user interface for configuring an eye overlay texture layer for controlling the representative eye image of the social robot.
[0068] FIG. 52 depicts a user interface for configuring a background layer for controlling the background of a representative eye image of the social robot.
[0069] FIG. 53 depicts a user interface for configuring an LED disposed around a body segment of the social robot.
[0070] FIG. 54 depicts a user interface for configuring events.
[0071] FIG. 55 depicts a user interface for configuring an audio event layer.
[0072] FIG. 56 depicts a user interface of a speech rules editor.
[0073] FIG. 57 depicts an alternate speech rules editor user interface screen.
[0074] FIG. 58 depicts a listen behavior editor user interface screen.
[0075] FIG. 59 depicts an alternate listen behavior editor user interface.
[0076] FIG. 60 depicts another alternate listen behavior editor user interface screen.
[0077] FIG. 61 depicts a MIM configuration user interface.
[0078] FIG. 62 depicts a MIM Rule editor user interface.
[0079] FIG. 63 depicts a flow editor of the PCD SDK.
DETAILED DESCRIPTION
[0080] In accordance with exemplary and non-limiting embodiments, there is provided and described a Persistent Companion Device (PCD) for continually residing in the environment of a person/user and to interact with a user of the companion device. As used herein, "PCD" and "social robot" may be used interchangeably except where context indicates otherwise. As described more fully below, PCD provides a persistent, social presence with a distinct persona that is expressive through movement, graphics, sounds, lights, scent. There is further introduced below the concept of a "digital soul" attendant to each embodiment of PCD. As used herein, "digital soul" refers to a plurality of attributes capable of being stored in a digital format that serve as inputs for determining and executing actions by a PCD. As used herein, "environment" refers to the physical environment of a user within a proximity to the user sufficient to allow for observation of the user by the sensors of a PCD.
[0081] This digital soul operates to engage users in social interaction and rapport-building activities via a social-emotional/interpersonal feel attendant to the PCD's interaction/interface. As described more fully below, PCD 100 may perform a wide variety of functions for its user. In accordance with exemplary and non-limiting embodiments described in detail below, PCD may (1) facilitate and support more meaningful, participatory, physically embedded, socially situated interactions between people/users, (2) engage in the performance of utilitarian tasks wherein PCD acts as an assistant or provides a personal service including, but not limited to, providing the user with useful information, assisting in scheduling, reminding, providing particular services such as acting as a photographer, and helping the family create/preserve/share family stories and knowledge (e.g., special recipes), and (3) entertain users (e.g., with stories, games, music, and other media or content) and provide company and companionship.
[0082] In accordance with exemplary and non-limiting embodiments, various functions of PCD may be accomplished via a plurality of modes of operation including, but not limited to:
i. Via a personified interface, optionally expressing a range of different personality traits, including traits that may adapt over time to provide improved companionship.
ii. Through an expressive, warm humanized interface that may convey information as well as affect. As described below, such an interface may express emotion, affect and personality through a number of cues including facial expression (either by animation or movement), body movement, graphics, sound, speech, color, light, scent, and the like.
iii. Via acquiring contextualized, longitudinal information across multiple sources (sensors, data, information from other devices, the internet, GPS, etc.) to render PCD increasingly tailored, adapted and tuned to its user(s).
iv. Via adaptive self-configuring/self-healing to better match the needs/wants of the user.
v. Via considering the social and emotional particulars of a particular situation and its user.
[0083] With reference to FIG. 1, there are illustrated numerous views of PCD 100 according to exemplary and non-limiting embodiments. As illustrated, PCD 100 incorporates a plurality of exemplary input/sensor devices including, for example, capacitive sensors 102. One or more capacitive sensors 102 may operate to sense physical social interaction including, but not limited to, stroking, hugging, touching and the like, as well as potentially serving as a user interface. PCD 100 may further incorporate a touch screen 104 as a device configured to receive input from a user as well as to function as a graphic display for the outputting of data by PCD 100 to a user. PCD 100 may further incorporate one or more cameras 106 for receiving input of a visual nature including, but not limited to, still images and video. PCD 100 may further incorporate one or more joysticks 108 to receive input from a user. PCD 100 may further incorporate one or more speakers 110 for emitting or otherwise outputting audio data. PCD 100 may further incorporate one or more microphones 112.
[0084] PCD Software Architecture
[0085] With reference to FIG. 2, there is illustrated a block diagram depicting software architecture 200 according to exemplary and non-limiting embodiments. The software architecture 200 may be adapted to technologies such as artificial intelligence, machine learning, and associated software and hardware systems that may enable the PCD 100 to come to life as an emotionally resonant persona that may engage people through a robotic embodiment as well as through connected devices across a wide range of applications.
[0086] In accordance with exemplary and non-limiting embodiments, the intelligence associated with the PCD 100 may be divided into one or more categories that may encode the human social code into machines. In some embodiments, these one or more categories may be a foundation of a PCD's cognitive-emotive architecture. The one or more categories may include, but are not limited to, psycho-social perception, psycho-social learning, psycho-social interaction, psycho-social expression and the like. The psycho-social perception category of intelligence may include an integrated machine perception of human social cues (e.g., vision, audition, touch) to support natural social interface and far-field interaction of the PCD 100. The psycho-social learning category may include algorithms through which the PCD 100 may learn about people's identity, activity patterns, preferences, and interests through direct interaction and via data analytics from the multi-modal data captured by the PCD 100 and device ecosystem. The PCD may record voice samples of people entering its near or far field communication range and make use of voice identification systems to obtain identity and personal data of the people detected. Further, the PCD may detect the UUID broadcasted in the Discovery Channel of BLE enabled devices and decode personal data associated with the device user. The PCD may use the obtained identity and personal data to gather additional personal information from social networking sites like Facebook, Twitter, LinkedIn, or similar. The PCD may announce the presence and identity of the people detected in its near or far field communication range along with a display of the constructed personal profile of the people.
[0087] The psycho-social interaction category may enable the PCD 100 to perform proactive decision making processes so as to support tasks and activities, as well as rapport building skills that build trust and an emotional bond with people - all through language and multi-modal behavior. The psycho-social expression category of the intelligence may enable the PCD 100 to orchestrate its multi-modal outputs to "come to life", to enliven content, and to engage people as an emotionally attuned persona through an orchestra of speech, movement, graphics, sounds and lighting. The architecture 200 may include modules corresponding to multi-modal machine perception technologies, speech recognition, expressive speech synthesis, as well as hardware modules that leverage cost effectiveness (i.e., components common to mobile devices). As illustrated in FIG. 2, there are provided one or more software subsystems within the PCD 100, and these one or more subsystems will be described in more detail below.
[0088] Psycho-Social Perception
[0089] The psycho-social perception of the PCD 100 may include an aural perception that may be used to handle voice input, and a visual-spatial perception that may be used to assess the location of, capture the emotion of, recognize the identity and gestures of, and maintain interaction with users. The aural perception of the PCD 100 may be realized using an array of microphones 202, one or more signal processing modules 204a and an automatic speech recognition module 206. Further, the aural perception may be realized by leveraging components and technologies created for the mobile computing ecosystem with the unique sensory and processing requirements of an interactive social robot. The PCD 100 may include hardware and software to support multi-modal far-field interaction via speech using the microphone array 202 and noise cancelling technology using the signal processing module 204a, as well as third-party solutions to assist with the automatic speech recognition module 206 and auditory scene analysis.
[0090] The PCD 100 may be configured to hear and understand what people are saying in a noisy environment. In order to do this, a sound signal may be passed through the signal processing module 204a before it is passed into the automatic speech recognizer (ASR) module 206. The sound signal is processed to isolate speech from static and dynamic background noises, echoes, motors, and even other people talking so as to improve the ASR's success rate.
[0091] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to use an array of at least 4 MEMS microphones in a spatial configuration. Further, a sound time-of-arrival based algorithm (referred to herein as a beam-forming algorithm) may be employed to isolate sound in a particular direction. Using multiple (e.g., all six) microphone signals, a direction vector, and the placement of the microphones, the beam-forming algorithm may isolate sound coming from a particular spatial source. The beam-forming algorithm may be able to provide information about multiple sources of sound by allowing multiple beams simultaneously. In addition, a speech/non-speech detection algorithm may be able to identify the speech source, and provide spatial localization of the speaker. In some embodiments, the beam-forming information may be integrated with the vision and awareness systems of the PCD 100 so as to choose the direction, as well as motor capability to turn and orient. For example, a 3D sensor may be used to detect the location of a person's head in 3D space and accordingly, the direction may be communicated to the beam-forming algorithm which may isolate sounds coming from the sensed location before passing that along to the ASR module 206.
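By way of a non-limiting illustration, the following Python sketch shows a simple delay-and-sum form of such a time-of-arrival beam-forming algorithm. The microphone geometry, sample rate, circular-shift alignment, and use of NumPy are illustrative assumptions and do not reflect the production algorithm.

```python
# Simplified delay-and-sum beam-forming sketch. Geometry, sample rate, and the
# circular-shift alignment are illustrative simplifications.
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second
FS = 16000              # sample rate in Hz

def delay_and_sum(signals, mic_positions, direction):
    """signals: (n_mics, n_samples) array; mic_positions: (n_mics, 3) in meters;
    direction: vector pointing from the array toward the desired source."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # A microphone farther along the source direction hears the wavefront earlier;
    # compute each microphone's lead time relative to the array origin.
    lead = mic_positions @ d / SPEED_OF_SOUND
    shifts = np.round((lead - lead.min()) * FS).astype(int)
    out = np.zeros(signals.shape[1])
    for m in range(signals.shape[0]):
        out += np.roll(signals[m], shifts[m])  # delay leading channels into alignment
    return out / signals.shape[0]

# Example: a four-microphone square array steered 30 degrees off its x-axis.
mics = np.array([[0.03, 0.03, 0.0], [-0.03, 0.03, 0.0],
                 [-0.03, -0.03, 0.0], [0.03, -0.03, 0.0]])
theta = np.deg2rad(30)
frames = np.random.randn(4, FS)  # stand-in for one second of captured audio
enhanced = delay_and_sum(frames, mics, [np.cos(theta), np.sin(theta), 0.0])
```

Sounds arriving from the chosen direction add coherently after the per-microphone delays, while sounds from other directions are attenuated by the averaging.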
[0092] During operation, the PCD 100 may generate sound either by speaking or making noises. The signal processing module 204a may be configured to prevent these sounds from being fed back through the microphone array 202 and into the ASR module 206. In order to remove speaker noise, the signal processing module 204a may employ algorithms that may subtract out the signal being fed to the speaker from the signal being received by the microphone. In order to reduce harmonically-rich motor noise, the PCD 100 may be configured to implement mechanical approaches and signal processing techniques.
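By way of example and not of limitation, the following Python sketch shows one common way to subtract the known speaker signal from the microphone signal, using a normalized least-mean-squares (NLMS) adaptive filter. The particular algorithm used by the signal processing module 204a is not limited to this approach, and the filter length, step size, and test signals below are illustrative.

```python
# Minimal NLMS sketch of subtracting a known speaker (reference) signal from
# the microphone signal; one common approach, not necessarily the algorithm
# used by signal processing module 204a.
import numpy as np

def nlms_cancel(mic, ref, taps=128, mu=0.5, eps=1e-8):
    """mic: microphone samples (user speech + echoed speaker output);
    ref: the signal sent to the loudspeaker; returns an echo-reduced signal."""
    w = np.zeros(taps)       # adaptive estimate of the echo path
    buf = np.zeros(taps)     # most recent reference samples, newest first
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = ref[n]
        echo_est = w @ buf               # predicted echo at the microphone
        e = mic[n] - echo_est            # residual: mostly the user's speech
        w += mu * e * buf / (buf @ buf + eps)  # NLMS weight update
        out[n] = e
    return out

# Example: the echo is a delayed, attenuated copy of the reference plus near-end speech.
fs = 16000
ref = np.random.randn(fs)                     # stand-in for the robot's own speech
echo = 0.6 * np.concatenate([np.zeros(10), ref[:-10]])
near_end = 0.1 * np.random.randn(fs)          # stand-in for the user's voice
cleaned = nlms_cancel(echo + near_end, ref)
```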
[0093] In some embodiments, the PCD 100 may monitor different parts of a motor so as to address the noise generated from these parts of the motor. In an example, the PCD 100 may be configured to mount the motor in an elastomeric material, which may absorb high frequencies that may be produced by armature bearings in the form of a whirring sound. The motor may include brushes that may produce a hissing sound, which is only noticeable when the motor is rotating at high speeds. Accordingly, the PCD 100 may exhibit animations and movements at a relatively low speed so as to avoid the hissing sound. Additionally, the PCD 100 may be configured to implement a lower gear ratio and further reduce the speed of the motor so as to lessen the hissing sound. Typically, lower quality PWM drives, like those found in hobbyist servos, may produce a high-pitched whine. The PCD 100 may be configured with good quality PWM drives so as to eliminate this part of the motor noise. Generally, gears of the motor may cause a lower pitched grinding sound, which accounts for the majority of the motor noise. The final gear drive may bear the most torque in a drive train, and is thus the source of the most noise. The PCD 100 may be configured to replace the final gear drive with a friction drive so as to minimize this source of noise. In addition, the PCD 100 may be configured to employ signal processing techniques so as to reduce noise generated by the motor. In an embodiment, a microphone may be placed next to each motor so that its noise signal may be subtracted from the signals in the main microphone array 202.
[0094] An output of the audio pipeline of the PCD 100 may feed the cleaned-up audio source into the ASR module 206 that may convert speech into text and possibly into alternative competing word hypotheses enriched with meaningful confidence levels, for instance using the ASR's n-best output or word lattices. The textual representation of speech (words) may then be parsed to "understand" the user's intent and the information the user provided, and eventually transformed into a symbolic representation (semantics). The ASR module 206 may recognize speech from users at a normal volume and at a distance that corresponds to the typical interpersonal communication distance. In an example, the distance may be near to 5-6 feet or greater, dependent on a multitude of environmental attributes comprising ambient noise and speech quality. In an example, the speech recognition range should cover an area of a typical 12 ft. by 15 ft. room. The signal fed to the ASR module 206 will be the result of the microphone-array beam-forming algorithm and may come from an acoustic angle of about +/- 30 degrees around the speaker. The relatively narrow acoustic angle may allow actively reducing part of the background ambient noise and reverberation, which are the main causes of poor speech recognition accuracy. In a scenario where the speech signal is too low, for instance due to the speaker being too far from the microphones, or the speaker speaking too softly, the PCD 100 may proactively request the speaker to get closer (e.g., if the distance of the speaker is available as determined by the 3D sensor) or to speak louder, or both. In some embodiments, the PCD 100 may be configured to employ a real-time embedded ASR solution which may support large vocabulary recognition with grammars and statistical language models (SLMs). Further, the acoustic ASR models may be trained and/or tuned using data from an acoustic rig so as to improve speech recognition rates.
[0095] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to include a natural language processing layer that may be sandwiched between the ASR module 206 and an interaction system of the PCD 100. The natural language processing layer may include a natural language understanding (NLU) module that may take the text generated by the ASR and assign meaning to that text. In some embodiments, the NLU module may be configured to adapt to formats such as augmented Backus-Naur form (BNF) notation, Java speech grammar format (JSGF), or speech recognition grammar format (SRGF), which may be supported by the above mentioned embedded speech recognizers. As more and more user utterances are collected, the PCD 100 may gradually transform traditional grammars into statistical grammars that may provide higher speech recognition and understanding performance, and allow for automatic data-driven adaptation.
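By way of a non-limiting illustration, the following Python sketch shows one simple way an NLU layer could score ASR n-best hypotheses against keyword rules to produce an intent. The rule names, threshold, and scoring are illustrative only and are far simpler than the grammar formats discussed above.

```python
# Toy sketch: pick an intent from ASR n-best hypotheses by matching simple
# keyword "grammars". Rule names and the 0.5 acceptance threshold are
# illustrative placeholders, not the PCD's actual NLU rules.
INTENT_RULES = {
    "take_photo": ["take", "picture"],
    "greet": ["hello", "hi"],
    "tell_story": ["tell", "story"],
}

def interpret(nbest):
    """nbest: list of (hypothesis_text, asr_confidence) pairs, best first."""
    best_intent, best_score = None, 0.0
    for text, conf in nbest:
        words = set(text.lower().split())
        for intent, keywords in INTENT_RULES.items():
            coverage = sum(1 for k in keywords if k in words) / len(keywords)
            score = conf * coverage      # combine ASR confidence and rule coverage
            if score > best_score:
                best_intent, best_score = intent, score
    return (best_intent, best_score) if best_score >= 0.5 else (None, best_score)

print(interpret([("hey let's take a picture", 0.82),
                 ("hay lets bake a picture", 0.41)]))
# -> ('take_photo', 0.82)
```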
[0096] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to design a structured interaction flow (based on the task network representation adopted for the brain of the PCD 100) using multimodal dialog system user interface design principles for each interaction task. The interaction flow may be designed to receive multimodal inputs (e.g. voice and touch) sequentially (e.g. one input at a time) or simultaneously (e.g. inputs may be processed independently in the order they are received) and to generate multimodal outputs (e.g. voice prompts, PCD's movements, display icons and text). As an example and not as a limitation, when the PCD 100 asks a yes/no question, an eye of the PCD 100 may morph into a question mark shape with yes/no icons that may be selected by one or more touch sensors. In an embodiment, the PCD 100 may be adapted to process natural language interactions that express the intent (e.g. Hey! Let's take a picture!). In an embodiment, interactions may be followed in a "directed dialog" manner. For instance, after the intent of taking a picture has been identified, the PCD 100 may ask directed questions, either for confirming what was just heard or asking for additional information (e.g. Do you want me to take a picture of you?).
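By way of a non-limiting illustration, the following Python sketch shows a minimal directed dialog of the kind described above, in which follow-up yes/no questions are asked after a photo intent has been identified. The say and listen_yes_no callbacks are stand-ins for whatever speech output and input facilities the platform provides.

```python
# Minimal directed-dialog sketch: once a photo intent is identified, ask
# follow-up yes/no questions to fill in missing details. The say and
# listen_yes_no callables are illustrative stand-ins.
def directed_photo_dialog(say, listen_yes_no):
    say("Do you want me to take a picture of you?")
    if listen_yes_no():                      # answer could also arrive via a touch icon
        say("Great, smile!")
        return {"intent": "take_photo", "subject": "user"}
    say("Should I take a picture of the room instead?")
    if listen_yes_no():
        return {"intent": "take_photo", "subject": "room"}
    say("Okay, maybe later.")
    return None

# Example wiring with console stand-ins for speech output and yes/no input.
if __name__ == "__main__":
    result = directed_photo_dialog(print, lambda: input("yes/no> ").strip() == "yes")
    print(result)
```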
[0097] Visual-Spatial Perception
[0098] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to employ one or more visual-spatial perception sensors such as an RGB camera 212, a depth camera 214 and other sensors so as to receive 2D vision, 3D vision, or sense motion or color. The PCD 100 may be configured to attain emotion perception of the user in the surrounding environment. For example, the PCD 100 may detect an expressed emotional state of each person. The PCD 100 may include a visual-spatial perception subsystem to keep track of the moment-to-moment physical state of users and the environment. This subsystem may present the current state estimate of users to the other internal software modules as a dynamically updated, shared data structure called the Local Perceptual Space (LPS) 208. The LPS may be built by combining multiple sensory input streams in a single 3D coordinate system centered on a current location of the PCD 100, while sensors may be registered in 3D using kinematic transformations that may account for its movements. In an embodiment, the LPS 208 may be designed to maintain multiple 'levels' of information, each progressing to higher levels of detail and requiring additional processing and key sensor inputs. The LPS 208 levels may include:
[0099] Person Detection: This level may detect persons present in nearby surroundings. For example, the PCD 100 may calculate the number of nearby persons using the sensors. In an embodiment, a visual motion cue in the system may be employed to orient the PCD 100. Further, pyroelectric infrared (PIR) sensing and a simple microphone output may be integrated to implement wake up on the microcontroller so that the system can be in a low-power 'sleep' state, but may still respond to someone entering the room. This may be combined with visual motion cues and color segmentation models to detect the presence of people. The detection may be integrated with the LPS 208.
[00100] Person Tracking: The PCD 100 may be configured to locate the person in 3D and accordingly, determine the trajectory of the person using sensors such as vision, depth, motion, sound, color, features & active movement. For example, a combination of visual motion detection and 3D person detection may be used to locate the user (especially their head/face). Further, the LPS 208 may be adapted to include temporal models and other inputs to handle occlusions and more simultaneous people. In addition to motion and 3D cues, the system may learn (from moving regions and 3D) a color segmentation model (Naive Bayes) online from images to adaptively separate the user's face and hands from the background (a toy sketch of such an online color model follows this list of levels) and combine the results of multiple inputs with the spatial and temporal filtering of the LPS 208 to provide robust person location detection for the system.
[00101] Person Identification: The PCD 100 may identify a known and an unknown person using vision sensors, auditory sensors or touch inputs for person ID. In an example, one or more open source OpenCV libraries may be used for the face identification module. In addition, person tracking information and motion detection may be combined to identify a limited set of image regions that are candidates for face detection.
[00102] Pose/Gesture Tracking: The PCD 100 may identify pose or posture of each person using visual classification (e.g., face, body pose, skeleton tracking, etc.), or touch mapping. In an embodiment, 3D data sets may be used to incorporate this feature with the sensor modalities of the PCD 100. In an example, an open source gesture recognition toolkit may be adopted for accelerating custom gesture recognition based on visual and 3D visual feature tracking.
[00103] Attention Focus: The PCD 100 may be configured to determine a focus area so that the PCD 100 may point to or look at the determined focus area. Various sensors may be combined into a set of locations/directions for attention focus. For example, the estimated locations of people may generate a set of attention focus locations in the LPS 208. These may be the maximum likelihood locations for estimations of people, along with the confidence of the attention drive for the given location. The set of focus points and directions are rated by confidence, and an overall summary of LPS 208 data for use by other modules is produced. The PCD 100 may use these focus points and directions to select gaze targets so as to address users directly and to 'flip its gaze' between multiple users seamlessly. Additionally, this may allow the PCD 100 robot to look at lower-confidence locations to confirm the presence of nearby users.
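By way of example and not of limitation, the following Python sketch shows an online color model of the general kind referenced in the Person Tracking level above: per-channel Gaussians for person versus background pixels, updated from samples labeled by motion and 3D cues. The running-average update, initial values, and NumPy usage are illustrative simplifications of the actual tracking pipeline.

```python
# Toy online color segmentation: per-channel Gaussians for "person" vs
# "background" pixels, updated online. Parameters are illustrative only.
import numpy as np

class OnlineColorModel:
    def __init__(self, alpha=0.05):
        self.alpha = alpha  # learning rate for the running statistics
        self.mean = {"person": np.full(3, 128.0), "background": np.full(3, 128.0)}
        self.var = {"person": np.full(3, 2500.0), "background": np.full(3, 2500.0)}

    def update(self, pixels, label):
        """pixels: (N, 3) RGB samples known (from motion/3D cues) to be `label`."""
        m, v = pixels.mean(axis=0), pixels.var(axis=0) + 1e-3
        self.mean[label] = (1 - self.alpha) * self.mean[label] + self.alpha * m
        self.var[label] = (1 - self.alpha) * self.var[label] + self.alpha * v

    def person_mask(self, image):
        """image: (H, W, 3) float array; returns a boolean mask of likely person pixels."""
        def log_lik(label):
            mu, var = self.mean[label], self.var[label]
            return (-0.5 * ((image - mu) ** 2 / var + np.log(2 * np.pi * var))).sum(axis=2)
        return log_lik("person") > log_lik("background")

# Example with stand-in data: a skin-like patch versus a darker background.
model = OnlineColorModel()
model.update(np.random.normal([200, 160, 140], 10, (500, 3)), "person")
model.update(np.random.normal([60, 60, 70], 10, (500, 3)), "background")
frame = np.random.randint(0, 255, (120, 160, 3)).astype(float)
mask = model.person_mask(frame)
```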
[00104] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to include activity estimation in the system or may incorporate more sensor modalities for tracking and identification by voice input as well as estimation of emotional state from voice prosody. The LPS 208 may combine data from multiple inputs using grid-based particle filter models for processed input features. The particle filters may provide support for robust online estimation of the physical state of users as well as a representation for multiple hypothesis cases when there is significant uncertainty that must be resolved by further sensing and actions on the PCD's part. The particle filtering techniques may also naturally allow a mixture of related attributes and sensory inputs to be combined into a single probabilistic model of physically measurable user state without requiring an explicit, closed form model of the joint distribution. Further, grid-based particle filters may help to fuse the inputs of 3D (stereo) and 2D (vision) sensing in a single coordinate system and enforce the constraint that the space may be occupied by only one object at any given time.
[00105] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to include heuristic proposal distributions and heuristic transition models that may help model user state over time even when the PCD 100 may not be looking at users directly. This may allow natural turn-taking in multi-party conversations using verbal and nonverbal cues with the PCD 100 and may easily fit within the particle filtering framework. As a result, this may allow combining robust statistical estimation with human-centric heuristics in a principled fashion. Furthermore, the LPS 208 may learn prior probability distributions from repeated interaction and will adapt to the 'hot spots' in a space where people may emerge from hallways, doors, and around counters, and may use this spatial information to automatically target the most relevant locations for users. The low-level image and signal processing code may be customized and based on quality open source tools such as OpenCV, the integrating vision toolkit (IVT), Eigen for general numerical processing, and processor-specific optimization libraries.
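By way of a non-limiting illustration, the following Python sketch shows a heavily simplified particle filter for a single person's planar location that fuses two noisy detections per tick. The random-walk motion model, noise values, and use of NumPy are illustrative assumptions and omit the grid constraints, heuristics, and multi-person handling described above.

```python
# Highly simplified particle filter for one person's (x, y) location, fusing
# two noisy detections (e.g., a 3D sensor and a vision cue) each tick.
# Motion and measurement noise values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 2000
particles = rng.uniform(low=[-3, -3], high=[3, 3], size=(N, 2))  # room in meters
weights = np.full(N, 1.0 / N)

def predict(particles, motion_std=0.1):
    """Random-walk transition model: people drift a little between ticks."""
    return particles + rng.normal(0.0, motion_std, particles.shape)

def update(particles, weights, measurement, meas_std):
    """Reweight particles by the likelihood of a noisy (x, y) detection."""
    d2 = ((particles - measurement) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    return weights / weights.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

for tick in range(10):
    particles = predict(particles)
    weights = update(particles, weights, np.array([1.2, 0.4]), meas_std=0.3)  # 3D sensor
    weights = update(particles, weights, np.array([1.0, 0.5]), meas_std=0.5)  # vision cue
    particles, weights = resample(particles, weights)

estimate = particles.mean(axis=0)  # likely person location, usable as a gaze target
```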
[00106] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to recognize from a video stream various levels of emotions such as joy, anger, contempt, disgust, fear, sadness, confusion, frustration, and surprise. In an embodiment, the PCD 100 may be configured to determine head position, gender, age, and whether someone is wearing glasses, has facial hair, etc.
[00107] In accordance with exemplary and non-limiting embodiments, the audio input system is focused on the user. In some embodiments, the PCD 100 may be configured to update the direction of the audio beam-forming function in real time, for example, depending on robot movement, kinematics and estimated 3D focus of attention directions. This may allow the PCD 100 to selectively listen to specific 'sectors' where there is a relevant and active audio input. This may increase the reliability of ASR and NLU functions through integration with full 3D person sensing and focus of attention.
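By way of example only, the following short Python sketch shows how an estimated head location in the robot-centered frame could be converted into a steering direction for a beam-forming routine such as the delay-and-sum sketch above. The values and function name are placeholders.

```python
# Illustrative update of the beam-forming direction from a tracked head position
# expressed in the robot-centered frame (values and function name are placeholders).
import numpy as np

def steering_vector(head_position_m):
    """Convert an estimated 3D head location into a unit direction for a beamformer."""
    p = np.asarray(head_position_m, dtype=float)
    return p / np.linalg.norm(p)

# Example: a head tracked 1.8 m away, slightly to the robot's side and above it.
direction = steering_vector([1.6, 0.8, 0.3])
# `direction` could then be handed to a beam-forming routine such as the
# delay_and_sum() sketch above, re-computed each tick as the person moves.
```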
[00108] Spatial Probability Learning
[00109] In accordance with exemplary and non-limiting embodiments, spatial probability learning techniques may be employed to help the PCD 100 engage more smoothly when users enter his presence. Over time, the PCD 100 may remember the sequences of arrival and joint presence of users and accumulate these statistics for a given room. This may give the PCD 100 an ability to predict engagement rules with the users on room entry and thereby may enable the PCD 100 to turn to a sector for a given time period and even guess the room occupants. For example, this feature may provide the PCD 100 an ability to use limited predictions to support interactions like "Hey, Billy is that you?" before the PCD 100 may have fully identified someone entering the room. At the same time, the PCD 100 may be turning to the spatial direction most likely to result in seeing someone at that time of day.
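By way of a non-limiting illustration, the following Python sketch accumulates arrival statistics per hour of day and direction sector and uses them to guess the likeliest sector and occupant. The data structures and example values are illustrative only.

```python
# Toy sketch of spatial probability learning: count arrivals per (hour, sector)
# so the robot can turn toward the likeliest direction and guess the arriving
# person before identification completes. Structures are illustrative.
from collections import defaultdict

class ArrivalModel:
    def __init__(self):
        # (hour_of_day, direction_sector) -> {person_name: count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, hour, sector, person):
        self.counts[(hour, sector)][person] += 1

    def guess(self, hour):
        """Return the (sector, person) with the most historical arrivals at this hour."""
        best = None
        for (h, sector), people in self.counts.items():
            if h != hour:
                continue
            person, n = max(people.items(), key=lambda kv: kv[1])
            if best is None or n > best[2]:
                best = (sector, person, n)
        return (best[0], best[1]) if best else None

model = ArrivalModel()
model.observe(hour=17, sector="hallway", person="Billy")
model.observe(hour=17, sector="hallway", person="Billy")
model.observe(hour=17, sector="kitchen", person="Jane")
print(model.guess(17))  # -> ('hallway', 'Billy'): turn there and ask "Billy, is that you?"
```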
[00110] Psycho-Social Interaction
[00111] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be a fully autonomous, artificial character. The PCD 100 may have emotions, may select his own goals (based on user input), and execute a closed loop real-time control system to achieve those goals to keep users happy and healthy. The psycho-social interaction module (PSIM) is a top layer of the closed loop, discrete time control system that may process outputs of the sensors and select actions for outputs and expressions. Various supporting processes may proceed concurrently on the CPU, and sensory inputs may be delivered asynchronously to the decision-making module. The "tick" is the decision cycle where the accumulated sensor information, current short-term memory/knowledge and task-driven, intentional state of the PCD 100 may be combined to select new actions and expressions.
[00112] FIG. 3 depicts architecture of the PSIM 300 in accordance with the exemplary and non-limiting embodiments. The core of the PSIM 300 is an executive 302 that orchestrates the operation of the other elements. The executive 302 is responsible for the periodic update of the brain of the PCD 100. Each "tick" of the PSIM 300 may include a set of processing steps that move towards issuing new commands to the psycho-social expression module in the following fashion:
[00113] Internal Update:
a. Emotion Update
b. Goal Selection
[00114] Input Handling:
a. Asynchronous inputs from the psycho-social perception 304 are sampled and updated into the blackboard 306 of the decision module.
b. The input may include information such as person locations, facial ID samples, and parsed NLU utterances from various users.
c. Only new information may need to be updated, as the blackboard 306 may act like a cache.
d. In addition, information relevant to current Tasks may need to be captured.
[00115] Query Handling:
a. Results from any knowledge query operations are sampled into the blackboard 306 from the psycho-social knowledge base 308.
b. This may collect the results of deferred processing of query operations for use in current decisions.
[00116] Task Network 310: Think/Update
a. The executive 302 may run the "think" operation of the task network 310, and any necessary actions and decisions are made at each level. The set of active nodes in the task network 310 may be updated during this process.
b. The task network 310 is a flexible form of state-machine-based logic that acts as a hierarchical controller for the robot's interaction.
[00117] Output Handling:
a. Outputs loaded into specific blackboard 306 frames are transferred to the psycho-social expression module 312.
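By way of a non-limiting illustration, the following NodeJS-style sketch shows how one such decision tick might be organized. The module objects (emotion, goals, perception, knowledgeBase, taskNetwork, expression) and their methods are hypothetical placeholders, not confirmed PCD API names.

```javascript
// Illustrative sketch of one decision "tick"; all module interfaces are assumed.
function tick(exec) {
  const bb = exec.blackboard;                        // shared blackboard (cache-like Map)

  // Internal update: emotion model and goal selection.
  exec.emotion.update();
  exec.goals.select(bb);

  // Input handling: sample asynchronous perception results into the blackboard.
  for (const { key, value } of exec.perception.drainPending()) {
    bb.set(key, value);                              // only new information is written
  }

  // Query handling: collect results of deferred knowledge-base queries.
  for (const { key, value } of exec.knowledgeBase.drainResults()) {
    bb.set(key, value);
  }

  // Task network think/update: active nodes decide; the active set may change.
  exec.taskNetwork.think(bb);

  // Output handling: transfer output frames to the psycho-social expression module.
  exec.expression.send(exec.outputFrames.splice(0));
}

// The executive would invoke the tick at a fixed rate, for example:
// setInterval(() => tick(exec), 100);
```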
[00118] In accordance with exemplary and non-limiting embodiments, the executive 302 may also provide the important service of asynchronous dispatch of the tasks in the task network 310. Any task in the network 310 may be able to defer computation to concurrent background threads by requesting an asynchronous dispatch to perform any compute-intensive work. This feature may allow the task network 310 to orchestrate heavyweight computation, and things like slow or even blocking network I/O, as actions without "blocking" the decision cycle or changing the reactivity of the decision process of the PCD 100. In some embodiments, the executive 302 may dispatch planning operations that generate new sections of the task network 310, and these sections will be dynamically attached to the executing tree to extend operation through planning capabilities as the product's intelligence matures. The task network 310 may be envisioned as a form of Concurrent Hierarchical Finite State Machine (CHFSM); the related approach used in behavior tree designs has had great success in allowing human designers and software engineers to work together to create interactive experiences within a content pipeline. The task network design may enable clean, effective implementation and composition of tasks in a traditional programming language.
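As a non-limiting illustration of such an asynchronous dispatch, the sketch below shows a task node that defers a slow operation to a promise and simply reports "running" on each tick until the result is available; the task-node interface and the fetchForecast stand-in are hypothetical.

```javascript
// Minimal sketch (hypothetical task-node interface): a task defers a slow operation
// (e.g., a blocking network call) so the decision cycle keeps ticking; the node
// reports "running" until the deferred result arrives on a later tick.
class FetchWeatherTask {
  constructor() { this.state = 'idle'; this.result = null; }

  think(blackboard) {
    if (this.state === 'idle') {
      this.state = 'running';
      // Asynchronous dispatch: the promise resolves on a later tick, never blocking.
      fetchForecast().then((forecast) => {
        this.result = forecast;
        this.state = 'done';
      });
    }
    if (this.state === 'done') blackboard.set('weather', this.result);
    return this.state;                               // 'running' or 'done'
  }
}

// Stand-in for slow or blocking I/O (assumed, for illustration only).
function fetchForecast() {
  return new Promise((resolve) => setTimeout(() => resolve({ tempF: 68 }), 500));
}
```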
[00119] FIG. 4 illustrates a task network that shows a simplified version of a greeting interaction by the PCD 100. The architecture of the task network 310 enables various expressions, movements, sensing actions and speech to be integrated within the engine, thereby giving designers complete control over the interaction dynamics of the PCD 100. As illustrated, only a tiny portion of the network is active at any time during operation. The visual task network representation may be used to communicate with both technical and design audiences as part of content creation. In this example, the PIR sensor of the PCD 100 has detected a person entering the area. The PCD 100 is aware that it may need to greet someone and starts the "Greet User" sequence. This "Greet User" sequence may initialize tracking on motion cues and then say "Hello", while updating tracking for the user as they approach. The PCD 100 may keep updating the vision input to capture a face ID of the user. In this scenario, the ID says it is Jane, so the PCD 100 moves on to the next part of the sequence, where the PCD 100 may form an utterance to check in on how Jane is doing and opens his ASR/NLU processing window to be ready for responses. Once Jane says something, a knowledge query may be used to classify the utterance into "Good" or "Bad", and the PCD 100 may form an appropriate physical and speech reaction for Jane to complete his greeting. The network may communicate the concept of how the intelligence works.
[00120] Psycho-Social Expression
[00121] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to include an engine that may complement the sociable nature of the PCD 100. For example, the engine may include a tagging system for modifying the speech output. The engine may allow controlling the voice quality of the PCD 100. In an example, recordings may be done by a voice artist so as to control the voice of the PCD 100. The engine may include features such as high-quality compressed audio files for embedded devices and a straightforward pricing model. Further, the PCD 100 may include an animation engine for providing animations for physical joint rotations; graphics, shape, texture, and color; LED lighting, or mood coloring; timing; and any other expressive aspect of the PCD 100. These animations can be accompanied by other expressive outputs such as audio cues, speech, scent, etc. The animation engine may then play all or parts of an animation at different speeds, with different transitions and in-between curves, while blending it with procedural animations in real time. This engine may flexibly accommodate different PCD models, geometry, and degrees of freedom.
[00122] Dynamic Targeting
[00123] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to employ an algorithm that may orient the PCD 100 towards points in 3D space procedurally. The eyes of the PCD 100 may appear to be fixed on a single point while the body of the PCD 100 may be playing a separate animation, or the eye may lead while the body may follow to point in a particular direction. In an embodiment, a closed-form, geometric solver to compute PCD's look-at target may be used. This target pose is then fed into a multi-target blend system which may include support for acceleration constraints, additive blending/layering, and simulated VOR (vestibulo-ocular reflex).
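A non-limiting sketch of such a closed-form look-at solve is shown below; it uses simplified geometry (a pan/tilt approximation in the robot's base frame) and a basic per-tick rate limit standing in for the acceleration-constrained, multi-target blend described above.

```javascript
// Minimal sketch (simplified geometry): convert a 3D target point in the base frame
// into pan/tilt angles, then rate-limit the move with a simple per-tick step constraint.
function lookAtAngles(target) {                      // target: {x, y, z} in meters
  const pan = Math.atan2(target.y, target.x);
  const horizontal = Math.hypot(target.x, target.y);
  const tilt = Math.atan2(target.z, horizontal);
  return { pan, tilt };
}

function stepToward(current, goal, maxStep) {        // rate-limit one joint per tick
  const delta = goal - current;
  return current + Math.max(-maxStep, Math.min(maxStep, delta));
}

let pose = { pan: 0, tilt: 0 };
const goal = lookAtAngles({ x: 1.0, y: 0.5, z: 0.3 });
pose = {
  pan: stepToward(pose.pan, goal.pan, 0.05),
  tilt: stepToward(pose.tilt, goal.tilt, 0.05),
};
console.log(pose);
```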
[00124] Simulation
[00125] In accordance with exemplary and non-limiting embodiments, the animation engine may include a simulator that may play and blend animations and procedural animations virtually. The simulator may simulate sensory input such as face detection. In some embodiments, a physical simulation may be built into the virtual model, considering the mass of the robot, the power of the motors, and the robot's current draw limits, to validate and test animations.
[00127] In accordance with exemplary and non-limiting embodiments, the graphical representation of the persona, e.g., the eye of the PCD 100, may be constructed using joints to allow it to morph and shape itself into different objects. An eye graphics engine may use custom animation files to morph the iris into different shapes, blink, change its color, and change the texture to allow a full range of expression.
[00128] Graphics
[00129] The PCD API may support the display of graphics, photos, animations, videos, and text in a 2D scene graph style interface.
[00130] Platform and Ecosystem
[00131] The PCD 100 is a platform, based on a highly integrated, high-performance embedded Linux system, coupled with an ecosystem of mobile device "companion" apps, a cloud-based back-end, and an online store with purchasable content and functionality.
[00132] PCD SDK
[00133] The PCD SDK may take advantage of JavaScript and the open languages of the modern web development community so as to provide an open and flexible platform on which third-party developers can add capabilities with a low learning curve. All PCD apps, content and services created with the PCD SDK are available for download from the PCD App Store. All of PCD's functions, including TTS, sensory awareness, NLU, animations, and others, will be available through the PCD API. This API uses NodeJS, a JavaScript platform that is built on top of V8, Chrome's open source JavaScript engine. NodeJS uses an event-driven model that is fast and efficient and translates well into robotics programming. NodeJS comes with a plethora of functionality out of the box and is easily extensible via add-ons. PCD's API will be a NodeJS add-on. Because add-ons are also easily removed or modified, the ways in which developers are able to interact with PCD may be controlled. For example, developers may create an outbound socket, but the number of outbound connections may also be limited.
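By way of a non-limiting illustration, a third-party skill written against such an API might look like the following sketch; the module name 'pcd-api' and the event, animation and TTS calls are hypothetical placeholders rather than confirmed API names.

```javascript
// Illustrative skill sketch; none of these identifiers are confirmed PCD API names.
const pcd = require('pcd-api');                      // assumed NodeJS add-on

pcd.on('personDetected', async (person) => {
  // Orient toward the person, then greet them by name if a face ID is available.
  await pcd.animation.play('orient-toward', { target: person.location });
  if (person.faceId) {
    await pcd.tts.say(`Hi ${person.name}, welcome back!`);
  } else {
    await pcd.tts.say("Hello! I don't think we've met.");
  }
});
```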
[00134] Cloud Architecture
[00135] In accordance with exemplary and non-limiting embodiments, a sophisticated cloud-based back-end platform may be used to support PCD's intelligence, to retrieve fresh content and to enable people to stay connected with their family. The PCD device in the home may connect to PCD servers in the cloud via Wi-Fi. Access to PCD cloud servers relies on highly secure and encrypted web communication protocols. Various applications may be developed for iOS, Android and HTML5 that may support PCD users, caregivers and family members on the go. With these mobile and web apps, the PCD 100 may always be with you, on a multitude of devices, providing assistance and all the while learning how to better support your preferences, needs and interests. Referring to FIG. 2, the PCD 100 may be configured to mirror in the cloud all the data that may make the PCD 100 unique to his family, so that users can easily upgrade to future PCD robot releases and preserve the persona and relationships they've established. For example, PCD's servers may be configured to collect data in the cloud storage 214 and compute metrics from the PCD robot and other connected devices to allow machine learning algorithms to improve the user models 216 and adapt the PCD persona model 218. Further, the collected data at the cloud storage 214 may be used to analyze what PCD features are resonating best with users, and to understand usage patterns across the PCD ecosystem, in order to continually improve the product offering.
[00136] In accordance with exemplary and non-limiting embodiments, a cloud-based back-end platform may contain a database system to be used for storage and distribution of data that is intended to be shared among a multitude of PCDs. The cloud-based back-end platform may also host service applications to support the PCDs in the identification of people (for example, a Voice ID application) and the gathering of personal multi-modal data through interworking with social networks.
[00137] Cloud-based Server
[00138] In accordance with exemplary and non-limiting embodiments, the one or more PCDs 100 may be configured to communicate with a cloud-based server back-end via RESTful web services using compressed JSON.
[00139] Security
[00140] In accordance with exemplary and non-limiting embodiments, a zero-configuration network protocol along with an OAuth authentication model may be used to validate identity. Further, a security framework may be applied to provide additional security protocols around roles and permissions, such as the Shiro™ security framework offered by Apache™, among others. All sensitive data will be sent over SSL. On the server side, data may be secured using a strict firewall configuration, with OAuth employed to obtain a content token. In addition, all calls to the cloud-based servers may be required to have a valid content token.
[00141] Content Delivery
[00142] In accordance with exemplary and non-limiting embodiments, a server API that includes a web service call to get the latest content for a given PCD device is used. This web service may provide a high-level call that returns a list of all the pending messages, alerts, updated lists (e.g., shopping, reminders, check-ins and the like) and other content in a concise, compact job manifest. The PCD robot may then retrieve the pending data represented in that manifest opportunistically based on its current agenda. In some embodiments, PCD's truth is in the cloud, meaning that the master record of lists, reminders, check-ins and other application state is stored on the PCD servers. To ensure that the robot may have access to the latest content, the API may be called frequently and the content collected opportunistically (but in a timely manner).
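A non-limiting sketch of such a manifest fetch is shown below using Node's built-in https and zlib modules; the host, path and token handling are illustrative assumptions rather than the actual PCD service endpoints.

```javascript
// Minimal sketch (hypothetical endpoint): fetch the compact job manifest over HTTPS
// with a previously obtained OAuth content token, as compressed JSON.
const https = require('https');
const zlib = require('zlib');

function getManifest(deviceId, contentToken) {
  return new Promise((resolve, reject) => {
    const options = {
      hostname: 'api.example-pcd-cloud.com',               // assumed host
      path: `/v1/devices/${deviceId}/manifest`,            // assumed path
      headers: { Authorization: `Bearer ${contentToken}`, 'Accept-Encoding': 'gzip' },
    };
    https.get(options, (res) => {
      const chunks = [];
      res.on('data', (c) => chunks.push(c));
      res.on('end', () => {
        const body = Buffer.concat(chunks);
        const json = res.headers['content-encoding'] === 'gzip'
          ? zlib.gunzipSync(body).toString('utf8')
          : body.toString('utf8');
        resolve(JSON.parse(json));
      });
    }).on('error', reject);
  });
}

// The robot can then pull each pending item when its agenda allows, e.g.:
// getManifest('pcd-1234', token).then((m) => m.items.forEach(queueForRetrieval));
```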
[00143] Workflow Management
[00144] In accordance with exemplary and non-limiting embodiments, functionality that is offloaded to the cloud and does not return results in real time may be used. This may tie in closely with the concept of the agenda-based message queuing discussed above. In addition, it may involve a server architecture that may allow requests for services to be made over the RESTful web service API and dispatch jobs to application servers. Amazon Simple Workflow (SWF) or a similar workflow service may be used to implement such a system along with traditional message queuing systems.
[00145] Updates
[00146] In accordance with exemplary and non-limiting embodiments, the content that may require updating may include the operating system kernel, the firmware, hardware drivers, the V8 engine or companion apps of the PCD 100. Updates to this content may be available through a web service that returns information about the types of updates available and allows for the request of specific items. Since PCD will often need to be opportunistic to avoid disrupting a user activity, the robot can request the updates when it can apply them. Rather than relying on the PCD robot to poll regularly for updates, the availability of certain types of updates may be pushed to the robot.
[00147] Logging/Metrics
[00148] In accordance with exemplary and non-limiting embodiments, the PCD 100 may send log information to the servers. The servers may store this data in the appropriate container (SQL or NoSQL). Tools such as Hadoop (Amazon MapReduce) and Splunk may be used to analyze data. Metrics may also be queryable so that reports may be run on how people interact with and use the PCD 100. The results of these analyses may be used to adjust parameters on how PCD learns, interacts, and behaves, and also on what features may be required in future updates.
[00149] Machine Learning
[00150] In accordance with exemplary and non-limiting embodiments, various training systems and feedback loops may be developed to allow the PCD robot and cloud-based systems to continuously improve. The PCD robots may collect information that can be used to train machine learning algorithms. Some amount of machine learning may occur on the robot itself, but in the cloud, data may be aggregated from many sources to train classifiers. The cloud-based servers may allow for ground truth to be determined by sending some amount of data to human coders to disambiguate content with a low probability of being heard, seen or understood correctly. Once new classifiers are created, they may be sent out through the Update system discussed above. Machine learning and training of classifiers/predictors may span supervised, unsupervised and reinforcement-learning methods, as well as the more complex human coding of ground truth. Training signals may include knowledge that the PCD robot has accomplished a task or explicit feedback generated by the user such as voice, a touch prompt, a smiling face, a gesture, etc. Accumulated camera images that include a face, together with audio data, may be used to improve the quality of those respective systems in the cloud.
[00151] Telepresence Support
[00152] In accordance with exemplary and non-limiting embodiments, a telepresence feature including a video chat option may be used. Further, a security model around the video chat to ensure the safety of users is enabled. In addition, a web app and also mobile device apps that utilize the roles, permissions and security infrastructure to protect the end users from unauthorized use of the video chat capabilities may be used.
[00153] Software Infrastructure
[00154] The high-level capabilities of PCD's software system are built on a robust and capable Embedded Linux platform that is customized with key libraries, board support, drivers and other dependencies to provide the high-level software systems with a clean, robust, reliable development environment. The top-level functional modules are realized as processes in the embedded Linux system. The module infrastructure of the PCD is specifically targeted at supporting flexible scripting of content, interactions and behavior in JavaScript while supporting computationally taxing operations in C++ and C based on language libraries. It is built on the V8 JavaScript engine and the successful Node.js platform with key extensions and support packaged as C++ modules and libraries.
[00155] Hardware System Architecture
[00156] FIG. 5A illustrates hardware architecture of the PCD 100 that may be engineered to support the sensory, motor, connectivity, power and computational needs of the one or more capabilities of the PCD 100. In some embodiments, one or more hardware elements of the PCD 100 are specializations and adaptations of core hardware that may have been used in high-end tablets and other mobile devices. However, the physical realization and arrangement of shape, motion and sensors are unique to the PCD 100. The overall physical structure of the PCD 100 may also be referred to herein as a 3-ring Zetatype. Such a physical structure may provide the PCD 100 a clean, controllable and attractive line of action. In an embodiment, the structure may be derived from the principles that may be used by character animators to communicate attention and emotion. The physical structure of the PCD 100 may define the boundaries of the mechanical and electrical architecture based on the three ring volumes, ranges of motion and necessary sensor placement.
[00157] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to include three axes for movement, one or more stereo vision cameras 504, a microphone array 506, touch sensing capabilities 508 and a display such as an LCD display 510. The three axes for movement may support emotive expression and the ability to direct sensors and attend to users in a natural way. The stereo vision camera 504 may be configured to support 3D location and tracking of users, for providing video input, camera snaps and the like. The microphone array 506 may support beam-formed audio input to maximize ASR performance. The touch sensing capabilities 508 may enable alternative interactions, making the PCD 100 feel like a friend, or serve as a form of user interface. The LCD display 510 may support emotive expression as well as dynamic information display. Ambient LED lighting may also be included.
[00158] In accordance with exemplary and non-limiting embodiments, the hardware architecture 500 may be configured to include an electrical architecture that may be based on a COTS processor from the embedded control and robotics space combined with a high-end application processor from the mobile device and tablet space. The embedded controller is responsible for motion control and low-level sensor aggregation, while the majority of the software stack runs on the application processor. The electrical boards in the product are separated by function for V1 design, and this may provide a modularity to match the physical structure of the robot while preventing design changes on one board from propagating into larger design updates. In some embodiments, the electrical architecture may include a camera interface board that may integrate two mobile-industry based low-resolution MIPI camera modules that may support hardware synchronization so that captured images may be registered in time for the stereo system. The stereo cameras are designed to stream video in continuous mode. In addition, the camera interface board may support a single RGB application camera for taking high-resolution photos and providing video-conference-quality video. The RGB application camera may be designed for use in specific photo taking, image snaps and video applications.
[00159] In accordance with exemplary and non-limiting embodiments, the hardware architecture may include a microphone interface board that may carry the microphone array 506 and audio processing and codec support 514, and send a digital stream of audio to a main application processor 516. The audio output from the codec 514 may be routed out to the speakers 518, which are in a separate section of the body for sound isolation.
[00160] In accordance with exemplary and non-limiting embodiments, the hardware architecture may include a body control board 520 that may be integrated in a middle section of the body and provides motor control, low-level body sensing, power management and system wakeup functionality for the PCD 100. As an example, and not as a limitation, the body control board 520 may be built around an industry-standard Cortex-M4F microcontroller platform. In addition, the architecture 500 may include an application processor board that may provide the core System On Chip (SoC) processor and tie together the remainder of the robot system. In an embodiment, the board may use a System On Module (SoM) to minimize the time and expense of developing early prototypes. In some embodiments, the application processor board may include the SoC processor for cost reduction and simplified production. The key interfaces of the application processor board may include interfaces for supporting MIPI cameras, the display, wireless communications and high-performance audio.
[00161] In accordance with exemplary and non-limiting embodiments, the hardware architecture 500 may be configured to include power management board 522 that may address the power requirements of the PCD 100. The power management board 522 may include power regulators, battery charger and a battery. The power regulators may be configured to regulate the input power so that one or more elements or boards of the hardware architecture 500 may receive a regulated power supply. Further, the battery charger may be configured to charge the battery so as to enable the PCD 100 to operate for long hours. In an embodiment, the PCD 100 may have a charging dock/base/cradle, which will incorporate a wall plug and a blind mate charging connector such that the PCD 100, when placed on the base, shall be capable of charging the internal battery.
[00162] Mechanical Architecture
[00163] In accordance with exemplary and non-limiting embodiments, various features of the PCD 100 are provided to the user in the form of a single device. FIG. 6A illustrates an exemplary design of the PCD 100 that may be configured to include the required software and hardware architecture so as to provide various features to the users in a friendly manner. The mechanical architecture of the PCD 100 has been optimized for quiet grace and expressiveness, while targeting a cost-effective bill of materials. By carefully selecting the best elements from a number of mature markets and bringing them together in a unique combination for the PCD 100, a unique device is produced. As illustrated in FIG. 6A, the mechanical architecture depicts placement of various boards, such as the microphone board, main board, battery board, body control board and camera board, at exemplary positions within the PCD 100. In addition, one or more vents are provided in the design of the PCD 100 so as to appropriately allow air flow to provide a cooling effect.
[00164] In accordance with various exemplary and non-limiting embodiments described below, PCD utilizes a plurality of sensors in communication with a processor to sense data. As described below, these sensors operate to acquire all manner of sensory input upon which the processor operates via a series of programmable algorithms to perform tasks. In fulfillment of these tasks, PCD 100 makes use of data stored in local memory forming a part of PCD 100 and accesses data stored remotely, such as at a server or in the cloud, via wired or wireless modes of communication. Likewise, PCD 100 makes use of various output devices, such as touch screens, speakers, tactile elements and the like, to output information to a user while engaging in social interaction. Additional, non-limiting disclosure detailing the operation and interoperability of data, sensors, processors and modes of communication regarding a companion device may be found in published U.S. Application 2009/0055019 A1, the contents of which are incorporated herein by reference.
[00165] The embodiments described herein present novel and non-obvious embodiments of features and functionality to which such a companion device may be applied, particularly to achieve social interaction between a PCD 100 and a user. It is understood, as it is known to one skilled in the art, that various forms of sensor data and techniques may be used to assess and detect social cues from a physical environment. Such techniques include, but are not limited to, voice and speech recognition, eye movement tracking, visual detection of human posture, position, motion and the like. Though described in reference to such techniques, this disclosure is broadly drawn to encompass any and all methods of acquiring, processing and outputting data by a PCD 100 to achieve the features and embodiments described herein.
[00166] In accordance with exemplary and non-limiting embodiments, PCD 100 may be expressed in a purely physical embodiment, as a virtual presence, such as when executing on a mobile computational device like a mobile phone, PDA, watch, etc., or may be expressed as a mixed-mode physical/virtual robot. In some embodiments, the source information for driving a mixed-mode, physical, or virtual PCD may be derived as if it is all the same embodiment. For example, source information as might be entered via a GUI interface and stored in a database may drive a mechanical PCD as well as the animation component of a display forming a part of a virtual PCD. In some embodiments, source information comprises a variety of sources, including outputs from AI systems, outputs from real-time sensing, source animation software models, kinematic information models, and the like. In some embodiments, data regarding the behavior of a purely virtual character may be pushed from a single source (at the source) and then output in both the physical and the virtual modes of a physical PCD. In this manner, embodiments of a PCD may span the gamut from purely physical to entirely virtual to a mixed mode involving some of both. PCD 100 possesses and is expressed as a core persona that may be stored in the cloud, and that can allow what a user does with the physical device to be remembered and persist, so that the virtual persona can remember and react to what is happening with the physical device, and vice versa. One can manage the physical and virtual instances via the cloud, such as to transfer from one to the other when appropriate, have a dual experience, or the like.
[00167] As illustrated, PCD 100 incorporates a generally tripartite design comprising three distinct body segments separated by a generally circular ring. By rotating each body segment about a ring, such as via internal motors (not shown), PCD 100 is configured to alter its shape to achieve various form factors as well as track users and other objects with sensors 102, 104, 106, 108, 112. In various embodiments, attributes of PCD 100 may be statically or dynamically configured including, but not limited to, a shape of touch screen 104, expressive body movement, specific expressive sounds and mnemonics, specific quality of prosody and vocal quality when speaking, the specifics of the digital interface, the "faces" of PCD 100, a full spectrum LED lighting element, and the like.
[00168] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to employ a multi-modal user interface wherein many inputs and outputs may be active simultaneously. Such a concurrent interface may provide a robust user experience. In some embodiments, one or more of the user interface inputs or outputs might be compromised depending upon the environment, resulting in less than optimal operation of the PCD 100. Operating the various modes simultaneously may help fail-safe the user experience and interaction with the device to guarantee no loss of communication.
[00169] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to process one or more inputs so as to provide an enriching experience to the user of the PCD 100. The PCD 100 may be configured to recognize speech of the user. For example, the PCD 100 may identify a "wake-up word" and/or other mechanism from the speech so as to reduce "false positive" engagements. In some embodiments, the PCD 100 may be configured to recognize speech in a near-field range of N x M feet, where N and M may be determined by the sound quality of speech and detection sensitivity of the PCD. In other embodiments, the PCD 100 may be configured to recognize speech with a far-field range in excess of N feet, covering at least the area of a 12-foot by 15-foot room. In some embodiments, PCD 100 may be configured to identify sounds other than spoken language. The PCD may employ a sound signature database configured with sounds that the PCD can recognize and act upon. The PCD may share the content of this database with other PCD devices via direct or cloud-based communications. As an example, and not as a limitation, the sounds other than the spoken language may comprise sounds corresponding to breaking glass, a doorbell, a phone ringing, a person falling down, sirens, gunshots, audible alarms, and the like. Further, the PCD 100 may be configured to "learn" new sounds by asking a user to identify the source of sounds that do not match existing classifiers of the PCD 100. The device may be able to respond to multiple languages. In some embodiments, the PCD 100 may be configured to respond to the user outside of the near-field range with the wake-up word. The user may be required to get into the device's field of vision.
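By way of a non-limiting illustration, the sketch below classifies a non-speech sound against a small signature database using a nearest-neighbor distance over feature vectors and falls back to asking the user to label unknown sounds; the feature representation and threshold are assumptions for illustration only.

```javascript
// Minimal sketch (assumed feature representation): match a sound against a signature
// database, or "learn" an unknown sound by prompting the user to identify it.
const signatures = [
  { label: 'doorbell',       features: [0.9, 0.1, 0.3] },
  { label: 'breaking glass', features: [0.2, 0.8, 0.7] },
];

function distance(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

function classifySound(features, threshold = 0.35) {
  let best = null;
  for (const sig of signatures) {
    const d = distance(features, sig.features);
    if (!best || d < best.d) best = { label: sig.label, d };
  }
  return best && best.d <= threshold ? best.label : null;
}

const label = classifySound([0.85, 0.15, 0.32]);
if (label) {
  console.log(`Recognized sound: ${label}`);
} else {
  // Unknown sound: ask the user to label it, then add it to the signature database.
  console.log("I heard a sound I don't recognize -- what was that?");
}
```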
[00170] In some embodiments, the PCD 100 may have touch sensitive areas on its surface that may be used when the speech input is compromised for any reason. Using these touch inputs, the PCD 100 may ask yes/no questions or display options on the screen and may consider user's touch on the screen as inputs from the user. In some embodiments, the PCD 100 may use vision and movement to differentiate one user from another, especially when two or more users are within the field of vision. Further, the PCD 100 may be capable of interpreting gross skeletal posture and movement, as well as some common gestures, within the near-field range. These gestures may be more oriented toward social interaction than device control. In some embodiments, the PCD 100 may be configured to include cameras so as to take photos and movies. In an embodiment, the camera may be configured to take photos and movies when the user is within a predetermined range of the camera. In addition, the PCD 100 may be configured to support video conferencing (pop-ins). Further, the PCD 100 may be configured to include a mode to eliminate "red eye" when the camera is in photo mode.
[00171] In some embodiments, the PCD 100 may be configured to determine if it is being picked up, carried, falling, and the like. In addition, the PCD 100 may be configured to implement a magnetometer. In some embodiments, the PCD 100 may determine ambient lighting levels. In addition, the PCD 100 may adjust the display and accent lighting brightness levels to an appropriate level based on ambient light level. In some embodiments, the PCD 100 may have the ability to use GPS to approximate the location of a device. The PCD 100 may determine relative location within a residence. In some embodiments, the PCD 100 may be configured to include one or more passive IR motion detection sensors (PIR) to aid in gross or far-field motion detection. In some embodiments, the PCD 100 may include at least one thermistor to indicate ambient temperature of the environment.
[00172] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to speak "one voice" English to a user in an intelligible, natural voice. The PCD 100 may be configured to change the tone of the spoken voice to emulate the animated device emotional state (sound sad when PCD 100 is sad, etc.). In some embodiments, the PCD 100 may be configured to include at least one speaker capable of playing speech, high-fidelity music and sound effects. In an embodiment, the PCD 100 may have multiple speakers, one for speech, one for music, and/or additional speakers for special audible signals and alarms. The speaker dedicated to speech may be positioned towards the user and tuned for voice frequency response. The speaker dedicated to music may be tuned for full frequency response. The PCD 100 may be configured to have a true color, full frame rate display. In some embodiments, the displayed active image may be (masked) round, at least 4-½" in diameter. In some embodiments, the PCD 100 may have a minimum of 3 degrees of freedom of movement, allowing for both 360-degree sensor coverage of the environment and a range of humanlike postures and movements (expressive line of action). The PCD 100 may be configured to synchronize the physical animation to the sound, speech, accent lighting, and display graphics. This synchronization may be close enough as to be seamless to human perception. In some embodiments, the PCD 100 may have designated areas that may use accent lighting for both ambient notification and social interaction. Depending on the device form, the accent lighting may help illuminate the subject in a photo when the camera of the PCD 100 is in photo or movie capture mode. In some embodiments, the PCD 100 may have a camera flash that will automatically illuminate the subject in a photo when the camera is in photo capture mode. Further, it may be better for the accent lighting to accomplish the illumination of the subject. In addition, the PCD 100 may have a mode to eliminate "red eye" when the camera is in photo capture mode.
[00173] In accordance with exemplary and non-limiting embodiments, the PCD 100 may identify and track the user. In an embodiment, the PCD 100 may be able to notice when a person has entered a near-field range. For example, the near-field range may be 10 feet. In another embodiment, the PCD 100 may be able to notice when a person has entered a far-field range. For example, the far-field range may be 10 feet. In some embodiments, the PCD 100 may identify up to 5 different users with a combination of video (face recognition), depth camera (skeleton feature matching), and sound (voice ID). In an embodiment, a "learning" routine is used by the PCD 100 to learn the users that the PCD 100 will be able to recognize. In some embodiments, the PCD 100 may locate and track users in a full 360 degrees within a near-field range with a combination of video, depth camera, and auditory scene analysis. In some embodiments, the PCD 100 may locate and track users in a full 360 degrees within a far-field range of 10 feet. In some embodiments, the PCD 100 may maintain an internal map of the locations of different users relative to itself whenever users are within the near-field range. In some embodiments, the PCD 100 may degrade functionality as the user gets farther from the PCD 100. In an embodiment, full functionality of the PCD 100 may be available to users within the near-field range of the PCD 100. In some embodiments, the PCD 100 may be configured to track mood and response of the users. In an embodiment, the PCD 100 may determine the mood of a user or group of users through a combination of video analysis, skeleton tracking, speech prosody, user vocabulary, and verbal interrogation (i.e., the device asks "how are you?" and interprets the response).
[00174] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be programmed with human social code to blend emotive content into its animations. In particular, programmatic intelligence should be applied to the PCD 100 to adjust the emotive content of the outputs appropriately in a completely autonomous fashion, based on perceived emotive content of user expression. The PCD 100 may be programmed to attempt to improve the sensed mood of the user through a combination of speech, lighting, movement, and sound effects. Further, the PCD social code may provide for the ability to build rapport with the user, e.g., mirroring behavior, mimicking head poses, etc.
[00175] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be programmed to deliver proactively customized Internet content comprising sports news and games, weather reports, news clips, information about current events, etc., to a user in a social, engaging method based on learned user preferences and/or to develop its own preferences for sharing that information and data as a way of broadening the user's potential interests.
[00176] The PCD device may be programmed with the capability of tailoring both the type of content and the way in which it is communicated to each individual user that it recognizes.
[00177] The PCD device may be programmed with the capability of improving and optimizing the customization of content/delivery to individual users over time based on user preferences and user reaction to and processing habits of the delivered Internet content.
[00178] The PCD may be programmed to engage in a social dialogue with the user to confirm that the delivered information was understood by the user.
[00179] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to manage and monitor activities of the user. In some embodiments, the communication devices 122, in conjunction with the service, may, at the user's request, create and store to-do, grocery, or other lists that can be communicated to the user once they have left for the shopping trip. In some embodiments, the PCD 100 may push the list (via the service) to the user's mobile phone as a text (SMS) message, or the list may be pulled by a user of either the mobile or web app, upon request. In some embodiments, the user may make such a request via voice on the PCD 100, or via the mobile or web app through the service. The PCD 100 may interact with the user to manage lists (i.e., removing items that were purchased/done/no longer needed, making suggestions for additional list items based on user history, etc.). The PCD 100 may infer the need to add to a list by hearing and understanding key phrases in ambient conversation (i.e., the device hears "we are out of coffee" and asks the user if they would like coffee added to the grocery list).
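By way of a non-limiting illustration, such an inference might be sketched as follows; the phrase pattern and list/TTS helpers are hypothetical placeholders.

```javascript
// Minimal sketch (assumed NLU output and helpers): infer a list addition from an
// overheard phrase such as "we are out of coffee", confirm, then update the list.
function inferListAddition(utterance) {
  const match = /(?:we(?:'re| are) out of|we need(?: more)?)\s+(.+)/i.exec(utterance);
  return match ? match[1].trim() : null;
}

async function onAmbientUtterance(utterance, pcd) {
  const item = inferListAddition(utterance);
  if (!item) return;
  await pcd.tts.say(`Should I add ${item} to the grocery list?`);   // hypothetical API
  if (await pcd.input.yesNo()) pcd.lists.add('grocery', item);      // hypothetical API
}
```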
[00180] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to provide user-generated reminders or messages at correct times. The PCD 100 may be used for setting up conditions for delivering reminders at the correct times. In an embodiment, the conditions for reminders may include real time conditions such as "the first time you see me tomorrow morning", or "the next time my daughter is here", or even "the first time you see me after noon next Tuesday" and the like. Once a condition set is met, the PCD 100 may engage the user (from a "look-at" as well as a body language/expression perspective) and deliver the reminder in an appropriate voice and character. In some embodiments, the PCD 100 may analyze mood content of a reminder and use this information to influence the animation/lighting/delivery of that reminder. In other embodiments, the PCD 100 may follow up with the user after the PCD 100 has delivered a reminder by asking the user if they performed the reminded action.
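A non-limiting sketch of such condition-based reminder delivery is shown below; the event shape, user identifiers and helper functions are hypothetical.

```javascript
// Minimal sketch (assumed condition model): a reminder fires the first time its
// condition set is satisfied, e.g. "the first time you see me after noon".
const reminders = [
  {
    text: 'Take your medication',
    delivered: false,
    condition: (event, now) =>
      event.type === 'userSeen' &&
      event.userId === 'dad' &&
      now.getHours() >= 12,
  },
];

function onPerceptionEvent(event, now = new Date()) {
  for (const reminder of reminders) {
    if (!reminder.delivered && reminder.condition(event, now)) {
      reminder.delivered = true;
      deliverReminder(reminder);           // engage the user, then speak the reminder
    }
  }
}

function deliverReminder(reminder) {
  console.log(`Reminder: ${reminder.text}`);
}

onPerceptionEvent({ type: 'userSeen', userId: 'dad' }, new Date('2024-01-09T13:00:00'));
```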
[00181] In accordance with exemplary and non-limiting embodiments, the PCD 100 may monitor the absence of the user upon a request that may be given by the user. For example, the user may tell the PCD 100 when and why they are stepping away (e.g., "I'm going for a walk now"), and the expected duration of the activity, so that the PCD 100 may ensure that the user has returned within a desired/requested timeframe. Further, the PCD 100 may notify emergency contacts as have been specified by the user for this eventuality, if the user has not returned within the specified window. The PCD 100 may notify the emergency contacts through text message and/or through a mobile app. The PCD 100 may recognize the user's presence and follow up on the activity (i.e., asking how the activity was, or other questions relevant to the activity) when the user has returned. Such interaction may enable social interaction between the PCD 100 and the user, and also enable collection of information about the user for the learning database. The PCD 100 may show check-out/check-in times and current user status to such family/friends as have been identified by the user for this purpose. This may be achieved through a mobile app. The PCD 100 may be capable of more in-depth activity monitoring/patterning/reporting.
[00182] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be configured to connect to external networks through one or more data connections. In some embodiments, PCD 100 may have access to a robust, high-bandwidth wireless data connection such as a WiFi data connection. In an embodiment, the PCD 100 may implement the 802.11n WiFi specification with a 2x2 two-stream MIMO configuration in both 2.4GHz and 5GHz bands. In some embodiments, the PCD 100 may connect to other Bluetooth devices (medical sensors, audio speakers, etc.). In an embodiment, the PCD 100 may implement the Bluetooth 4.0 LE (BLE) specification. The BLE-enabled PCD 100 device may be configured to customize its UUID to include and share multi-modal user data with other BLE-enabled PCD 100 devices. In some embodiments, the PCD 100 may have connectivity to 3G/4G/LTE or other cellular networks.
[00183] In accordance with exemplary and non-limiting embodiments, a multitude of PCD 100 devices may be configured in a meshed network configuration using ad-hoc networking techniques to allow for direct data sharing and communications without the need for a cloud based service. Alternatively, data to be shared among multiple PCD 100 devices may be uploaded and stored in a cloud based data base / data center where it may be processed and prepared for broadcasting to a multitude of PCD 100 devices. A cloud based data service may be combined with a meshed network arrangement to provide for both local and central data storage, sharing, and distribution for a multitude of PCD 100 devices in a multitude of locations.
[00184] In accordance with exemplary and non-limiting embodiments, a companion application may be configured to connect with the PCD 100. In some embodiments, the companion application may be available on the following platforms: iOS, Android, and Web. The companion application may include an intuitive and easy-to-use user interface (UI) that may not require more than three interactions to access a feature or function. The companion application may provide the user access to a virtual counterpart of the PCD 100 so that the user may access this virtual counterpart to interact with the real PCD 100.
[00185] In some embodiments, the user may be able to access information such as shopping lists, activity logs of the PCD 100 through the companion application. Further, the companion application may present the user with longitudinal reports of user activity local to the PCD 100. In some embodiments, the companion application may connect the user via video and audio to the PCD 100. In addition, the companion application may asynchronously alert the user to certain conditions (e.g., a local user is later than expected by a Check-In, there was a loud noise and local user is unresponsive, etc.).
[00186] In some embodiments, an administration/deployment application to allow connectivity or control over a family of devices may be available on a web platform. A UI of the administration application may enable hospital/caregiver administrators or purchasers who may need quick access to detailed reports, set-up, deployment, and/or support capabilities. Further, a group may be able to access information stored across a managed set of PCD 100 devices using the administration application. The administration application may asynchronously alert an administrator to certain conditions (e.g., a local user is later than expected by a Check-In, there was a loud noise and the local user is unresponsive, etc.). In addition, the administration application may broadcast messages and reminders across a subset or all of its managed devices.
[00187] In accordance with exemplary and non-limiting embodiments, a support console may allow personnel of the PCD 100 to monitor/support/diagnose/deploy one or more devices. The support console may be available on web platform. In an embodiment, the support console may support a list view of all deployed PCD devices that may be identified by a unique serial number, owner, institutional deployment set, firmware and application version numbers, or registered exception. In an embodiment, the support console may support interactive queries, with tags including serial number, owner, institutional deployment set, firmware and application version numbers, or registered exception. Further, the support console may support the invocation and reporting of device diagnostics.
[00188] In accordance with exemplary and non-limiting embodiments, the support console may assist in the deployment of new firmware and software versions (push model). Further, the support console may assist in the deployment of newer NLUs, new apps, etc. The support console may support customer support scenarios, broadcasting of messages to a subset or all deployed devices to communicate things like planned downtime of the service, etc. In some embodiments, the support console may need to support access to a variety of on-device metrics, including (but not exclusive to): time spent interacting with the PCD 100, time breakdown across all the apps/services, aggregated hit/miss metrics for audio and video perception algorithms, logged actions (to support data mining, etc.), logged exceptions, alert thresholds (e.g. at what exception level should the support console scream at you?), and others.
[00189] In accordance with exemplary and non-limiting embodiments, PCD 100 may engage in teleconferencing. In some embodiments, teleconferencing may be initiated via a simple UI, either with a touch of the body of PCD 100 or touch screen 104, or via voice activation such as may be initiated with a number of phrases, sounds and the like. In one embodiment, no more than two touches of PCD 100 are required to initiate teleconferencing. In some embodiments, calls may also be initiated as an output of a Call Scheduling/Prompting feature. Once initiated, PCD 100 may function as a phone using microphone 112 and speaker 110 to receive and output audio data from a user while using a Wi-Fi connection, Bluetooth, a telephony connection or some combination thereof to effect phone functionality.
[00190] Calls may either be standard voice calls or contain video components. During such interactions, PCD 100 may function as a cameraman for the PCD 100 end of the conversation. In some embodiments, PCD 100 may be placed in the middle of a table or other social gathering point with a plurality of users, such as a family, occupying the room around PCD 100, all of whom may be up, moving, and active during the call. During the call, PCD 100 may point camera 106 at a desired location. In one embodiment, PCD 100 may utilize sound localization and face tracking to keep camera 106 pointed at the speaker/user. In other embodiments, PCD 100 may be directed (e.g., "PCD, look at Ruby") by people/users in the room. In other embodiments, a remote person may be able to specify a target to be tracked via a device, and the PCD 100 will autonomously look at and track that target. In either scenario, what camera 106 receives as input is presented to the remote participant if, for example, they are using a smart phone, laptop, or another device capable of displaying video.
[00191] The device may be able to understand and respond in multiple languages. During such an interaction, PCD 100 may also function as the "interpreter" for the person on the other end of the link, much like the paradigm of a United Nations interpreter, by receiving voice input, translating the input via a processor, and outputting the translated output. If there is a screen available in the room with PCD 100, such as a TV, iPad, and the like, PCD 100 may send, such as via Bluetooth or Wi-Fi, audio and, if available, video of the remote participant to be displayed on this TV screen. If there is no other screen available, PCD 100 may relay the audio from the remote participant, but no remote video may be available. In such an instance, PCD 100 is merely relaying the words of the remote participant. In some embodiments, PCD 100 may be animated and reactive to a user, such as by, for example, blinking and looking down if the remote participant pauses for a determined amount of time, or doing a little dance or "shimmy" if PCD 100 senses that the remote participant is very excited.
[00192] In another embodiment, PCD 100 may be an avatar of the person on the remote end of the link. For example, an eye or other area displayed on touch screen 104 may morph to a rendered version (either cartoon, image based or video stream, among other embodiments) of the remote participant's face. The rendering may be stored and accessible to PCD 100. In other embodiments, PCD 100 may also retrieve data associated with and describing a remote user and imitate motions/non-verbal cues of remote user to enhance the avatar experience.
[00193] In some embodiments, during the call, either remote or local participants can cue the storage of still images, video, and audio clips of the participants and PCD 100's camera view, or notes (e.g., "PCD, remember this number"). These tagged items will be appropriately meta-tagged and stored in a PCD cloud.
[00194] In accordance with other embodiments, PCD 100 may also help stimulate remote interaction upon request. For example, a user may ask PCD 100 to suggest a game, which will initiate Connected Gaming mode, described more fully below, and suggest games until both participants agree. In another example, a user may also ask PCD 100 for something to talk about. In response, PCD 100 may access "PCD In The Know" database targeted at common interests of the conversation participants, or mine a PCD Calendar for the participants for an event to suggest that they talk about (e.g., "Grandma, tell Ruby about the lunch you had with your friend the other day").
[00195] Scheduling Assistant
[00196] In accordance with exemplary and non-limiting embodiments, PCD 100 may suggest calls based on calendar availability, special days, and/or knowledge of presence at another end of the link (e.g., "your mom is home right now, and it's her birthday, would you like to call her?"). The user may accept the suggestion, in which case a PCD Call app is launched between PCD 100 and the remote participant's PCD 100, phone, smart device, or Skype account. A user may also accept the suggestion by asking PCD 100 to schedule the call later, in which case a scheduling app adds it to the user's calendar.
[00197] Call Answering and Messaging
[00198] In accordance with exemplary and non-limiting embodiments, a call answering and messaging functionality may be implemented with PCD 100. This feature applies to voice or video calls placed to PCD 100 and PCD 100 will not perform call management services for other cellular connected devices. With reference to FIG. 7, there is illustrated a flowchart 700 of an exemplary and non-limiting embodiment. As illustrated, at step 702, when a call is placed to PCD 100, PCD 100 may announce the caller to the people in the room. If no one is in the room, PCD 100 may check the user's calendar and, if it indicates that they are not at home, PCD 100 may send the call directly to a voicemail associated with PCD 100, at step 704. If, conversely, it indicates they are at home, PCD 100 will, at step 706, use louder sounds (bells, rings, shouts?) to get the attention of a person in the house.
[00199] Once PCD 100 has his user's attention, at step 708, PCD 100 may announce the caller and ask if they would like to take the call. At step 710, a user may respond with a simple touch interface or, ideally, with a natural language interface. If the answer is yes, at step 712, PCD 100 connects the call as described in the Synchronous On-Demand Multimodal Messaging feature. If the answer is no, at step 714, the call is sent to PCD 100 voicemail.
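By way of a non-limiting illustration, the call-handling flow of FIG. 7 might be sketched as follows; the presence, calendar, voicemail and input helpers are hypothetical placeholders.

```javascript
// Minimal sketch (hypothetical helpers): announce, check presence/calendar, escalate,
// then connect or route to voicemail based on the user's answer.
async function handleIncomingCall(caller, pcd) {
  if (!pcd.presence.anyoneInRoom()) {
    if (!pcd.calendar.userIsHome()) return pcd.voicemail.record(caller);       // step 704
    await pcd.audio.playAttentionSound();                                      // step 706
  }
  await pcd.tts.say(`${caller.name} is calling. Would you like to take it?`);  // step 708
  const answer = await pcd.input.yesNo();                                      // step 710
  return answer ? pcd.calls.connect(caller)                                    // step 712
                : pcd.voicemail.record(caller);                                // step 714
}
```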
[00200] If a caller is directed to voicemail, PCD 100 may greet them and ask them to leave a message. In some embodiments, a voice or voice/video (if caller is using Skype or equivalent) message may be recorded for playback at a later date.
[00201] Once the user returns and PCD 100 detects them in the room again, PCD 100 may, at step 716, inform them of the message (either verbally with "you have a message", or nonverbally with lighted pompom, etc.) and ask them if they would like to hear it. If yes, PCD 100 may either play back audio or play audio/video message on a TV/tablet/etc. as described above.
[00202] The user may have the option of saving the message for later. He can either tell PCD 100 to ask again at a specific time, or just "later", in which case PCD 100 will ask again after a predetermined amount of time.
[00203] If the caller is unknown to PCD 100, PCD 100 may direct the call to voicemail and notify the user that an unidentified call from X number was received, and play back the message if one was recorded. The user may then instruct PCD 100 to effectively block that number from connection/voicemail going forward. PCD 100 may also ask if the user wishes to return the call either synchronously or asynchronously. If user accepts, then PCD 100 launches appropriate messaging mode to complete user request. In some embodiments, PCD 100 may also provide Call Manager functionality for other cellular or landline devices in the home. In yet other embodiments, PCD 100 may answer the call and conversationally prompt the caller to leave a message thus playing role of personal assistant.
[00204] Connected Story Reading
[00205] In accordance with exemplary and non-limiting embodiments, PCD 100 may incorporate a Connected Story Reading app to enable a remote participant to read a story "through" PCD 100 to a local participant in the room with PCD 100. The reader may interact through a simple web- or Android-app-based interface, guided by a virtual PCD 100 through the process of picking a story and reading it. The reader may read the words of the story as prompted by virtual PCD 100. In some embodiments, the reader's voice will be played back by the physical PCD 100 to the listener, with preset filters applied to the reader's voice so that the reader can "do the voices" of the characters in an incredibly compelling way even if he/she has no inherent ability to do this. A sound track and effects can also be inserted into the playback. The reader's interface may also show the "PCD's Eye View" video feed of the listener, and PCD 100 may use its "Cameraman" ability to keep the listener in the video.
[00206] Physical PCD 100 may also react to the story with short animations at appropriate times (shivers of fear, etc.), and PCD's 100 eye, described above, may morph into different shapes in support of story elements. This functionality may be wrapped inside a PCD Call feature such that the reader and the listener can interrupt the story with conversation about it, etc. The app may recognize that the reader has stopped reading the story, and pause the feature so the reader and listener can converse unfiltered. Alternatively, the teller could prerecord the story and schedule it to be played back later using the Story Relay app described below.
[00207] Hotline
[00208] In accordance with exemplary and non-limiting embodiments, a user may utilize PCD 100 to communicate with "in-network" members via a "push to talk" or "walkie-talkie" style interface. This feature may be accessed via a single touch on the skin or a screen icon on PCD 100, or via a simple voice command "PCD 100, talk to Mom". In some embodiments, this feature is limited to only PCD-to-PCD conversation, and may only be useable if both PCDs 100 detect a user presence on their end of the link.
[00209] Story Relay
[00210] With reference to FIG. 8, there is illustrated a flowchart 800 of an exemplary and non-limiting embodiment. As illustrated, at step 802, a user/story teller may record a story at any time for PCD 100 to replay later. Stories can be recorded in several ways:
[00211] By PCD 100: the storyteller tells their story to a PCD 100, who records it for playback
[00212] By Virtual PCD 100 web interface or Android app: the user is guided by virtual PCD 100 to tell their story to a webcam. They also have the opportunity to incorporate more rich animations/sound effects/background music in these types of stories.
[00213] Once a story has been recorded, PCD 100 may replay the story according to the scheduling preferences set by the teller, at step 804. The listener will be given the option to hear the story at the scheduled time, and can accept, decline, or reschedule the story.
[00214] In an embodiment, during the storytelling, PCD 100 may take still photos of the listener at a predetermined rate. Once the story is complete, PCD 100 may ask the listener if he/she would like to send a message back to the storyteller, at step 806. If the user accepts, then at step 808, PCD 100 may enter the "Asynchronous Multimodal Messaging" feature and compile and send the message either to the teller's physical PCD 100 if they have one, or via virtual PCD 100 web link. The listener may have the opportunity to incorporate a photo of him/herself listening to the story in the return message.
[00215] Photo/Memory Maker
[00216] In accordance with exemplary and non-limiting embodiments, PCD 100 may incorporate a photo/memory maker feature whereby PCD 100 takes over the role of photographer for an event. There are two modes for this:
[00217] PCD Snap Mode
[00218] In this mode, the users who wish to be in the picture may stand together and say "PCD, take a picture of us". PCD 100 acknowledges, then uses verbal cues to center the person/s in the camera image, using cues like "back up", "move left", etc. When they are properly positioned PCD 100 tells them to hold still, then uses some sort of phrase to elicit a smile ("cheese", etc.). PCD 100 may use facial expression recognition to tell if they are not smiling and continue to attempt to elicit a smile. When all users in the image are smiling, PCD 100 may take several pictures, using auto-focus and flash if necessary.
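A minimal sketch of the framing-and-smile loop described for PCD Snap Mode follows; the camera and speaker objects and their methods are hypothetical stand-ins for whatever vision and speech facilities PCD 100 actually exposes.

    import time

    def snap_photo_session(camera, speaker, max_attempts=10):
        """Center the subjects, elicit smiles, then take several pictures."""
        for _ in range(max_attempts):
            frame = camera.capture()
            faces = camera.detect_faces(frame)
            if not faces:
                speaker.say("I can't see you yet; please step into view.")
            elif any(face.out_of_frame for face in faces):
                speaker.say("Back up a little and move left.")  # verbal framing cues
            elif not all(face.smiling for face in faces):
                speaker.say("Say cheese!")  # keep trying to elicit a smile
            else:
                speaker.say("Hold still!")
                return [camera.capture() for _ in range(3)]  # take several pictures
            time.sleep(1.0)
        return []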
[00219] Event Photographer Mode
[00220] In this mode, a user may instruct PCD 100 to take pictures of an event for a predetermined amount of time, starting at a particular time (or "now", if desired). PCD 100 uses a combination of sound location and face recognition to look around the room and take candid pictures of the people in the room at a user defined rate. All photos generated may be stored locally in PCD 100 memory.
[00221] Once photos are generated, PCD 100 may inform a user that photos have been uploaded to the PCD 100 cloud. At that point, they can be accessed via the PCD 100 app or web interface, where a virtual PCD 100 may guide the user through the process of deleting, editing, cropping, etc. photos. The photos may then be emailed to the user or posted to Facebook, etc. In this "out of the box" version of this app, photos might only be kept on the PCD 100 cloud for a predetermined amount of time, with permanent storage and filing/metatagging offered at a monthly fee as part of, for example, a "living legacy" app described below.
[00222] As described herein, PCD 100 may thus operate to aid in enhancing interpersonal and social occasions. In one embodiment, an application, or "app", may be configured or installed upon PCD 100 to access and operate one or more interface components of PCD 100 to achieve a social activity. For example, PCD 100 may include a factory installed app that, when executed, operates to interact with a user to receive one or more parameters in accordance with which PCD 100 proceeds to take and store one or more photos. For example, a user may say to PCD 100, "Please take at least one picture of every separate individual at this party." In response, PCD 100 may assemble a list of party guests from an accessible guest list and proceed to take photos of each guest. In one embodiment, PCD 100 may remain stationary and query individuals as they pass by for their identity, record the instance, and take a photo of the individual. In another embodiment, PCD 100 may interact with guests and ask them to set PCD 100 in front of groupings of guests in order to take their photos. Over a period of time, such as the duration of the party, PCD 100 acquires one or more photos of party guests in accordance with the user's wishes in fulfillment of the social goal/activity comprising documenting the social event.
[00223] In accordance with other exemplary embodiments, PCD 100 may read and react to social cues. For example, PCD 100 may observe a user indicate to another person the need to speak more softly. In response, PCD 100 may lower the volume at which it outputs verbal communications. Similarly, PCD 100 may emit sounds indicative of satisfaction when hugged or stroked. In other embodiments, PCD 100 may emit or otherwise output social cues. For example, PCD 100, sensing that a user is running late for an appointment, may rock back and forth in a seemingly nervous state in order to hasten the rate of the user's departure.
[00224] Interactive Calendar
[00225] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with a calendar system to capture the business of a user and family outside of work. PCDs 100 may be able to share and integrate calendars with those of other PCD 100s if their users give permission, so that an entire extended family with a PCD 100 in every household would be able to have a single unified calendar for everyone.
[00226] Items in PCD 100's calendar may be metatagged with appropriate information, initially the name of the family member(s) that the appointment is for, how they feel about the appointment/event, date or day-specific info (holidays, etc.) and the like. Types of events that may be entered include, but are not limited to, wake up times, meal times, appointments, reminders, phone calls, household tasks/yardwork, etc. Note that not all events have to be set to a specific time - events may be scheduled predicated on sensor inputs, etc., for instance "remind me the first time you see me tomorrow morning to pack my umbrella".
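One way to represent such a metatagged calendar item, including a sensor-predicated trigger rather than a clock time, is sketched below; the field names and trigger label are illustrative assumptions only.

    from dataclasses import dataclass, field

    @dataclass
    class CalendarItem:
        """One entry in the PCD calendar, metatagged as described above."""
        description: str
        members: list                 # family member(s) the event is for
        feeling: str = "neutral"      # emotional metatag, e.g. from an emoticon
        when: str = None              # ISO timestamp, or None for sensor-predicated events
        trigger: str = None           # e.g. a named sensor event
        tags: dict = field(default_factory=dict)

    umbrella = CalendarItem(
        description="Pack my umbrella",
        members=["Dad"],
        trigger="first_sighting_tomorrow_morning",  # fires on a sensor event, not a clock time
    )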
[00227] Entry of items into PCD's 100 calendar may be accomplished in a number of ways. One embodiment utilizes an Android app or web interface, where virtual PCD 100 guides the user through the process. It is at this point that emoticons or other interface elements can be used to tell PCD 100 how a user is feeling about the appointment/event. Graphical depiction of a calendar in this mode may be similar to Outlook, allowing a user to see the events/appointments of other network members. PCD 100 Calendar may also have a feature for appointment de-confliction similar to what Outlook does in this regard.
[00228] In some embodiments, users may also be able to add items to the calendar through a natural language interface ("PCD, I have a dentist appointment on Tuesday at 1PM, remind me half an hour earlier", or "PCD, dinner is at 5:30PM tonight"). User feeling, if not communicated by a user, may be inquired afterward by PCD 100 (e.g., "How do you feel about that appointment?"), allowing appropriate emotional metatagging.
[00229] Once an event reminder is tripped, PCD 100 may pass along the reminder in one of two ways. If the user for whom the reminder was set is present in PCD 100's environment, PCD 100 will pass along the reminder in person, complete with verbal reminder, animation, facial expressions, etc. Emotional content of facial expression may be derived from metatagging of an event such as through emoticon or user verbal inputs. His behaviors can also be derived from known context (for instance, he's always sleepy when waking up or always hungry at mealtimes). Expressions that are contextually appropriate to different events can be refreshed by authoring content periodically to keep it non-repetitive and entertaining.
[00230] If the user for whom the reminder is occurring is NOT physically present with PCD 100, PCD 100 can call out for them. In such an instance, if they are non-responsive to this, PCD 100 may text their phone with the reminder.
[00231] List Manager
[00232] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with a List Manager feature. In accordance with this feature, PCD 100 may, at the user's request, create to-do lists or shopping lists that can be texted to the user once they have left for the shopping trip. The feature may be initiated by the user via a simple touch interface, or ideally, through a natural language interface. A user may specify the type of list to be made (e.g., "grocery", "clothes", "to-do", or a specific type of store or store name). PCD 100 may ask what is initially on the list, and the user may respond via spoken word to have PCD 100 add things to the list. At any later time, user may ask PCD 100 to add other items to the list.
[00233] In accordance with some embodiments, PCD 100 may be able to parse everyday conversation to determine that an item should be added to the list. For example, if someone in the room says "we're out of milk", PCD 100 might automatically add that to the grocery list.
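A minimal sketch of parsing everyday conversation for list items follows, assuming a single hard-coded trigger phrase; a deployed system would rely on broader natural language understanding.

    import re

    # Hypothetical trigger phrase; real parsing would be far more general.
    OUT_OF_PATTERN = re.compile(r"we(?:'re| are) out of (.+)", re.IGNORECASE)

    def maybe_add_to_list(utterance, grocery_list):
        """Add an item to the grocery list when overheard speech matches the trigger."""
        match = OUT_OF_PATTERN.search(utterance)
        if match:
            item = match.group(1).strip(" .!").lower()
            if item not in grocery_list:
                grocery_list.append(item)
        return grocery_list

    print(maybe_add_to_list("Hey, we're out of milk.", []))  # -> ['milk']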
[00234] When the user is leaving for a trip to a store for which PCD 100 has maintained a list, the user may request PCD 100 to text the appropriate list to them, so that it will be available to them when they are shopping in the store. Additionally, if the user is away from PCD 100 but near a store, they may request the list to be sent through the Android or web app.
[00235] Upon their return (i.e., the next time PCD 100 sees that user after they have requested the list to be texted to them), PCD 100 may ask how the trip went/whether the user found everything on the list. If "yes", PCD 100 will clear the list and wait for other items to be added to it. If "no", PCD 100 will inquire about what was not purchased, and clear all other items from the list.
[00236] In the case of to-do lists, a user may tell PCD 100 "I did X", and that item may be removed from the stored list.
[00237] Users might also request to have someone else's PCD-generated list texted to them (pending appropriate permissions). For example, if an adult had given a PCD 100 to an elder parent, that adult could ask PCD 100 to send them the shopping list generated by their parent's PCD 100, so that they could get their parent's groceries while they were shopping for their own, or they could ask PCD 100 for Mom's "to-do" list prior to a visit to make sure they had any necessary tools, etc.
[00238] PCD in the Know
[00239] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with an "In the Know" feature. In accordance with this feature, PCD 100 may keep a user up to date on the news, weather, sports, etc. in which a user is interested. This feature may be accessed upon request using a simple touch interface, or, ideally, a natural language command (e.g., "PCD 100, tell me the baseball scores from last night").
[00240] The user may have the ability to set up "information sessions" at certain times of day. This may be done through a web or mobile app interface. Using this feature, PCD 100 may be scheduled to relay certain information at certain times of day. For instance, a user might program their PCD 100 to offer news after the user is awake. If the user says "yes", PCD 100 may deliver the information that the user has requested in his/her "morning briefing". This may include certain team scores/news, the weather, a review of headlines from major papers, etc. PCD 100 may start with an overview of these items and at any point the user may ask to know more about a particular item, and PCD 100 will read the whole news item.
[00241] News items may be "PCD-ized". Specifically, PCD 100 may provide commentary and reaction to the news PCD 100 is reading. Such reaction may be contextually relevant as a result of AI generation.
[00242] Mood, Activity, and Environment Monitor
[00243] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with a mood, activity, and environment monitor feature in the form of an application for PCD 100. This application may be purchased by a person who had already purchased PCD 100, such as for an elder parent. Upon purchase, a web interface or an Android app interface may be used to access the monitoring setup and status. A virtual PCD 100 may guide the user through this process. Some examples of things that can be monitored include (1) Ambient temperature in the room/house where PCD 100 is, (2) Activity (# of times a person walked by per hour/day, # of hours without seeing a person, etc.), (3) a mood of person/s in room: expressed as one of a finite set of choices, based upon feedback from sensors (facial expressions, laughter frequency, frequency of use of certain words/phrases, etc.) and (4) PCD 100 may monitor compliance to a medication regimen, either through asking if medication had been taken, or explicitly watching the medication be taken.
[00244] The status of the monitors that may have been set can be checked via the app or web interface; in the case of an alert level being exceeded (e.g., it is too cold in the house, no one has walked by in a threshold amount of time), a text could be sent by PCD 100 to a monitoring user. In addition, PCD 100 may autonomously remind the user if certain conditions set by the monitoring user via the app or web interface are met such as, for example, shivering and asking for the heat to be turned up if it is too cold.
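A minimal sketch of the threshold check behind such alerts is shown below; the reading and limit names, and the send_text hook, are assumptions for illustration.

    def check_monitors(readings, limits, send_text):
        """Compare the latest readings against caregiver-set alert levels."""
        alerts = []
        if readings["room_temp_f"] < limits["min_temp_f"]:
            alerts.append("It is too cold in the house.")
        if readings["hours_since_person_seen"] > limits["max_hours_unseen"]:
            alerts.append("No one has walked by in a while.")
        if not readings["medication_taken_today"]:
            alerts.append("Today's medication has not been observed.")
        for alert in alerts:
            send_text(alert)  # text the monitoring user, per the embodiment above
        return alerts

    check_monitors({"room_temp_f": 58, "hours_since_person_seen": 2, "medication_taken_today": True},
                   {"min_temp_f": 62, "max_hours_unseen": 12}, print)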
[00245] Mood Ring
[00246] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with a Mood Ring feature. The mood ring feature may make use of PCD's 100 sensors to serve as an indicator and even an influencer of the mood/emotional state of the user. This feature may maintain a real time log of the user's emotional state. This indicator may be based on a fusion of facial expression recognition, body temperature, eye movement, activity level and type, speech prosody, keyword usage, and even such simple techniques as PCD 100 asking a user how they are feeling. PCD 100 will attempt to use verification techniques (such as asking) to correct his interpretations and make a better emotional model of the user over time. This may also involve "crowd sourcing" learning data (verified sensor data <-> emotional state mappings from other users) from the PCD 100 cloud. With reference to FIG. 9, there is illustrated a flowchart 900 of an exemplary and non-limiting embodiment. At step 902, PCD 100 interprets user body/facial/speech details to determine his emotional state. Over time, PCD 100 is able to accurately interpret user body/facial/speech details to determine the emotional state.
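A minimal sketch of fusing the listed channels into a single mood estimate follows, assuming each channel has already been scored in [-1, 1]; the channel names, weights, and thresholds are illustrative only, and the embodiment above contemplates richer models refined by user verification and crowd-sourced data.

    def estimate_mood(signals, weights=None):
        """Fuse per-channel mood scores into a coarse emotional-state label."""
        weights = weights or {"face": 0.4, "prosody": 0.3, "keywords": 0.2, "activity": 0.1}
        score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
        if score > 0.3:
            return "happy"
        if score < -0.3:
            return "sad"
        return "neutral"

    print(estimate_mood({"face": -0.8, "prosody": -0.4, "keywords": 0.0, "activity": -0.2}))  # sad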
[00247] Once PCD 100 has determined the emotional state of the user, he reports this out to others at step 904. This can be done in a number of ways. To caregivers that are co-located (in hospital setting, for instance), PCD 100 can use a combination of lighting/face graphics/posture to indicate the mood of the person he belongs to, so that a caregiver could see at a glance that the person under care was sad/happy/angry/etc. and intervene (or not) accordingly.
[00248] To caregivers who are not co-located (for example, an adult taking care of an aging parent who still lives alone), PCD 100 could provide this emotional state data through a mobile/web app that is customizable in terms of which data it presents and for which time periods.
[00249] Once this understanding of a user's mood is established, PCD 100 tries to effect a change in that mood, at step 906. This could happen autonomously, wherein PCD 100 tries to bring about a positive change in user emotional state through a process of story/joke telling, commiseration, game playing, emotional mirroring, etc. Alternatively, a caregiver, upon being alerted by PCD 100 that the primary user is in a negative emotional state, could instruct PCD 100 to say/try/do certain things that they may know will alleviate negative emotions in this particular circumstance.
[00250] Night Light
[00251] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with a Night Light feature. In accordance with this feature, PCD 100 may act as an animated nightlight if the user wakes in the middle of the night. If the right conditions are met (e.g., time is in the middle of the night, ambient light is very low, there has been stillness and silence or sleeping noises for a long time, and then suddenly there is movement or speaking), PCD 100 may wake gently, light a pompom in a soothing color, and perhaps inquire if the user is OK. In some embodiments, PCD 100 may suggest an activity or app that might be soothing and help return the user to sleep.
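The trigger conditions for the Night Light behavior can be expressed as a simple predicate; the thresholds used here are illustrative placeholders, not values taken from this disclosure.

    from datetime import datetime

    def should_wake_as_night_light(now, ambient_light, minutes_still, movement_or_speech):
        """True when the night-light conditions described above are all met."""
        middle_of_night = now.hour >= 23 or now.hour < 5
        return (middle_of_night
                and ambient_light < 0.05       # very low ambient light, normalized 0..1
                and minutes_still > 60         # long stillness or sleeping noises
                and movement_or_speech)        # then sudden movement or speaking

    print(should_wake_as_night_light(datetime(2017, 3, 31, 2, 30), 0.01, 120, True))  # True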
[00252] Random Acts of Cuteness
[00253] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with a Random Acts of Cuteness feature. In accordance with this feature, PCD 100 may operate to say things/ask questions throughout the day at various times in a manner designed to be delightful or thought provoking. In one embodiment, this functionality does not involve free form natural language conversation with PCD 100, but, rather, PCD's 100 ability to say things that are interesting, cute, funny, etc. as fodder for thought/conversation.
[00254] In some embodiments PCD 100 may access a database, either internal to PCD 100 or located externally, of sayings, phrases, jokes, etc., that is created, maintained, and refreshed from time to time. Data may come from, for example, weather, sports, news, etc. RSS feeds, crowd sourcing from other PCD 100s, and user profiles. Through a process of metatagging these bits and comparing the metatags to individual PCD 100 user preferences, the appropriate fact or saying may be sent to every individual PCD 100.
[00255] When PCD 100 decides to deliver a Random Act of Cuteness, PCD 100 may connect to the cloud, give a user ID, etc., and request a bit from the data repository. As described above, the server will match a fact to the user preferences, day/date/time, weather in the user's home area, etc., to determine the best bit to deliver to that user.
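A minimal sketch of matching a repository "bit" to a user follows; the simple tag-overlap score stands in for whatever server-side matching against preferences, date/time, and local weather is actually employed.

    def pick_bit(bits, user_profile):
        """Choose the 'bit' whose metatags best overlap the user's stated interests."""
        def score(bit):
            return len(set(bit["tags"]) & set(user_profile["interests"]))
        return max(bits, key=score) if bits else None

    bits = [
        {"text": "Your team won last night!", "tags": ["sports", "baseball"]},
        {"text": "It might snow later today.", "tags": ["weather"]},
    ]
    print(pick_bit(bits, {"interests": ["baseball", "cooking"]})["text"])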
[00256] In some embodiments, this feature may take the form of a simple question, where the question is specific enough to make recognition of the answer easier, while the answers to such questions may be used to help build the profile of that user, thus ensuring more fitting bits delivered to his/her PCD 100 at the right times. In other embodiments, a user may specifically request an Act of Cuteness through a simple touch interface or through a natural language interface. In some embodiments, this feature may employ a "like/dislike" user feedback solicitation so as to enable the algorithm to get better at providing bits of interest to this particular user.
[00257] DJ PCD
[00258] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with a DJ feature. In accordance with this feature, PCD 100 may provide music playing, dancing, and song suggestions. This feature may operate in several modes. Such modes or functions may be accessed and controlled through a simple touch interface (no more than 2 beats from beginning to desired action), or, in other embodiments, through a natural language interface. Music may be stored locally or received from an external source.
[00259] When PCD 100 plays a song using this feature, PCD 100 may use beat tracking to accompany the song with dance animations, lighting/color shows, facial expressions, etc. PCD's 100 choice of song may depend on which mode is selected such as:
[00260] Jukebox Mode
[00261] In this mode, PCD 100 may play a specific song, artist, or album that the user selects.
[00262] Moodbox Mode
[00263] In this mode, the user requests a song of a certain mood. PCD 100 may use mood metatags to select a song. The user can give feedback on songs similar to Pandora, allowing PCD 100 to tailor weightings for future selections.
[00264] Ambient Music Mode
[00265] Once a user selects this mode, PCD 100 uses information from the web (date, day of the week, time of day, calendar events, weather outside, etc.) as well as from sensors 102, 104, 106, 108, 112 (e.g., number/activity level of people in the room, noise levels, etc.) to select songs to play and volumes to play them at, in order to create background ambience in the room. Users may have the ability to control volume or skip a song. In addition, users may be able to request a specific song at any time, without leaving ambient music mode. The requested song might be played, and the user choice (as with volume changes) might be used in future selection weightings.
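A deliberately simple heuristic for such context-driven ambient selection is sketched below; the context keys, mood tags, and thresholds are assumptions for illustration, not parameters of this disclosure.

    def choose_ambient_track(context, library):
        """Pick a song and playback volume from room and web context."""
        if context["people_in_room"] >= 4 and context["noise_level"] > 0.6:
            mood, volume = "upbeat", 0.7
        elif context["hour"] >= 21:
            mood, volume = "calm", 0.3
        else:
            mood, volume = "neutral", 0.5
        candidates = [song for song in library if song["mood"] == mood] or library
        return candidates[0], volume

    library = [{"title": "Evening Calm", "mood": "calm"},
               {"title": "Party Mix", "mood": "upbeat"}]
    print(choose_ambient_track({"people_in_room": 1, "noise_level": 0.2, "hour": 22}, library))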
[00266] PCD Likes
[00267] While in some embodiments a user may directly access this mode ("what kind of music do you like, PCD?"), PCD 100 may also occasionally interject one or more choices into a stream of songs, or try to play a choice upon initiation of Jukebox or Moodbox Mode (in ambient music mode, PCD 100 may NOT do this). PCD's music choices may be based on regularly updated lists from PCD 100, Inc., created by writers or by, for instance, crowd sourcing song selections from other PCDs. PCD 100 Likes might also pull a specific song from a specific PCD 100 in the user's network - for instance PCD 100 may announce "Your daughter is requesting this song all the time now!", and then play the daughter's favorite song.
[00268] Dancing PCD
[00269] In accordance with exemplary and non-limiting embodiments, after playing a song in any mode, PCD 100 may ask how it did (and might respond appropriately happy or sad depending on the user's answer), or may give the user a score on how well the user danced. PCD 100 may also capture photos of a user dancing and offer to upload them to a user's PCD profile, a social media site, or email them. Various modes of functionality include:
[00270] Copy You
[00271] In this mode, PCD 100 chooses a song to play, and then uses sound location/face/skeleton tracking to acquire the user in the vis/RGBD camera field of view. As the user dances along to the music, PCD 100 may try to imitate the user's dance. If the user fails to keep time with the music, the music may slow down or speed up. At the end of the song, PCD 100 may ask how it performed in copying the moves of the user, or give the user a score on how well the user kept the beat. PCD 100 may also capture photos of the user dancing and offer to upload them to the user's PCD profile, a social media site, or email them to the user.
[00272] Copy PCD
[00273] In this mode, PCD 100 dances and the user tries to imitate the dance. Again, the playback of music is affected if the user is not doing a good job. In some embodiments, a separate screen shows a human dancer for both a user and PCD 100 to imitate. The user and PCD 100 both do their dance-alongs and then PCD 100 grades both itself and the user.
[00274] Dance Along
[00275] In this mode, the user plays music from a radio, iPod, singing, humming, etc., and PCD 100 tries to dance along, asking how well it did at the end.
[00276] Story Acting/Animating
[00277] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured with a Story Acting/Animating feature. In accordance with this feature, PCD 100 may operate to allow a user to purchase plays for an interactive performance with PCD 100. With reference to FIG. 10, there is illustrated a flowchart 1000 of an exemplary and non-limiting embodiment. The plays may be purchased outright and stored in the user's PCD Cloud profile, or they may be rented Netflix style, at step 1002.
[00278] Purchasing of plays/scenes may occur through, for example, an Android app or web interface, where a virtual PCD 100 may guide the user through the purchase and installation process. In some embodiments, at step 1004, users may select the play/scene they want to perform. This selection, as well as control of the feature while using it, may be accomplished via a simple touch interface (either PCD's 100 eye or body), or via a natural language interface. Once a user selects a play, PCD 100 may ask whether the user wants to rehearse or perform at step 1006, which will dictate the mode to be entered.
[00279] Regardless of mode chosen, at step 1008, PCD 100 may begin by asking the user which character they want to be in the play. After this first time, PCD 100 will verify that choice if the play is selected again, and the user can change at any time.
[00280] Rehearsal Mode
[00281] Once the user has entered rehearsal mode, PCD 100 may offer to perform the play in order to familiarize the user with the play, at step 1010. The user may skip this if they are already familiar. If the user does want PCD 100 to perform the play, PCD 100 may highlight the lines for the user's role as the user performs a read through, at step 1012.
[00282] Following this read through, PCD 100 may begin to teach lines to the user, at step 1014. For each line, PCD 100 may announce the prompt and the line, and then show the words on touch screen 104 while the user recites the line. PCD 100 may use speech recognition to determine if the user is correct, and will keep trying until the user repeats the line correctly. PCD 100 may then offer the prompt to the user and let them repeat the line, again trying until the user can repeat the line appropriately to the prompt. PCD 100 may then move to the next line.
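The per-line teaching loop described above can be sketched as follows; the say, listen, and recognize hooks are hypothetical speech-output, audio-capture, and speech-recognition facilities, and exact string matching stands in for a more forgiving comparison.

    def rehearse_line(prompt, line, say, listen, recognize, max_tries=3):
        """Teach one line: announce the prompt and line, then loop until the user repeats it."""
        say("Your cue is: " + prompt)
        say("Your line is: " + line)
        for _ in range(max_tries):
            heard = recognize(listen())
            if heard.strip().lower() == line.strip().lower():
                say("That's it!")
                return True
            say("Almost. Let's try that line again.")
        return False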
[00283] Once the user has learned all lines, at step 1016, PCD 100 will do a run through with all prompts, checking for the proper line in response and prompting the user if necessary.
[00284] Note that prompts can take a graphical form at first, with the eye morphing into a shape that suggests the line. This might be the first attempt at a prompt, and if the user still cannot remember the line, then PCD 100 can progress to verbal prompting.
[00285] Performance Mode
[00286] Once a user has memorized all the lines for the character they wish to portray, they can enter Performance Mode, at step 1018. In this mode, PCD 100 will do a full up performance of the play, pausing to let the user say their lines and prompting if the user stumbles or forgets. PCD 100 will use full sound effects, background music, animations, and lighting effects during this performance, even during user-delivered lines. In some embodiments, after the play is performed, PCD 100 may generate a cartoon/animated version of the play, with the user's voice audio during their lines included and synced to the mouth of the character they play (if that is possible). This cartoon may be stored on the PCD cloud, posted to social media sites, or emailed to user for sharing/memory making. In some embodiments, PCD 100 may also be configured to perform plays with multiple participants each playing their own character, and participants may be remote (e.g., on the other end of a teleflow).
[00287] Dancing PCD - Sharing
[00288] In accordance with an exemplary and non-limiting embodiment, PCD 100 may be configured to employ an additional feature of the Dancing PCD app described above. In some embodiments of this feature, a user may create a custom dance for PCD 100. This is created through a mobile or web app, allowing the user to pick the song and select dance moves to put together for PCD 100 to perform with the music. The user may also let PCD 100 pick a dance move such that the dance is created collaboratively with PCD 100. In some embodiments, lighting/sound effects (e.g., PCD saying "get down!") may be added and synced with the dance. In other embodiments, PCD 100 dances may be sent to other PCDs 100, shown to friends performed by the virtual PCD 100, saved online, etc. The user may also play other PCD 100 dances created by other PCD 100 users.
[00289] Celebrity Generated Content
[00290] In accordance with exemplary and non-limiting embodiments, this feature allows the user to download or stream to their PCD 100 celebrity generated content. Content is chosen through a web interface or Android app, where a Virtual PCD 100 may guide the user through the process of content purchase. Content may be either:
[00291] Prerecorded
[00292] This might include director/actor commentary for movies, Mystery Science Theater 3000 type jokes, etc. All content may be cued to a film. Audio watermarking may be used to sync PCD 100's delivery of content with the media being watched.
[00293] Live Streaming
[00294] In this mode, PCD 100 may stream content that is being generated real time by a celebrity/pundit in a central location. The content creator may also have the ability to real-time "puppet" PCD 100 to achieve animations/lighting/color effects to complement the spoken word. In such instances, no audio watermarking is necessary as the content creator will theoretically be watching the event concurrently with the user and making commentary in real time. This might include political pundits offering commentary on presidential speeches, election coverage, etc., or a user's favorite athlete providing commentary on a sporting event.
[00295] In accordance with an exemplary and non-limiting embodiment, a persistent companion device (PCD) 100 is adapted to reside continually, or near continually, within the environment of a person or persons. In one embodiment, the person is a particular instance of a person for which various parametric data identifying the person is acquired by or made available to the PCD. As described more fully below, in addition to a person's ID, PCD 100 may further recognize patterns in behavior (schedules, routines, habits, etc.), preferences, attitudes, goals, tasks, etc.
[00296] The identifying parametric data may be used to identify the presence of the person using, for example, voice recognition, facial recognition and the like utilizing one or more of the sensors 102, 104, 106, 108, 112 described above. The parametric data may be stored locally, such as within a memory of PCD 100, or remotely on a server with which PCD 100 is in wired or wireless communication such as via Bluetooth, Wi-Fi and the like. Such parametric data may be inputted into PCD 100 or server manually or may be acquired by the PCD 100 over time or as part of an initialization process.
[00297] For example, upon bringing an otherwise uninitialized PCD 100 into the environment of a user, a user may perform an initialization procedure whereby the PCD 100 is operated/interacted with to acquire an example of the user's voice, facial features or the like (and other relevant factual info). In a family hub embodiment described more fully below, there may be a plurality of users forming a social network of users comprising an extended family. This data may be stored within the PCD 100 and may be likewise communicated by the PCD 100 for external storage such as, for example, at a server. Other identifying user data, such as user name, user date of birth, user eye color, user hair color, user weight and the like may be manually entered, such as via a graphical user interface or speech interface of a server or forming a part of PCD 100. Once a portion of the parametric data is entered into or otherwise acquired by PCD 100, PCD 100 may operate to additionally acquire other parametric data. For example, upon performing initialization comprising providing a sample voice signature, such as by reciting a predetermined text to PCD 100, PCD 100 may autonomously operate to identify the speaking user and acquire facial feature data required for facial identification. As PCD 100 maintains a persistent presence within the environment of the user, PCD 100 may operate over time to acquire various parametric data of the user.
[00298] In some embodiments, during initialization PCD 100 operates to obtain relevant information about a person beyond their ID. As noted above, PCD 100 may operate to acquire background info, demographic info, likes, contact information (email, cell phone, etc.), interests, preferences, personality, and the like. In such instances, PCD 100 may operate to acquire text based/GUI/speech entered information such as during a "getting acquainted" interaction. In addition, PCD 100 may also operate to acquire contact info and personalized parameterized information of the family hub (e.g., elder parent, child, etc.), which may be shared between PCDs 100 as well as entered directly into a PCD 100. In various embodiments described more fully below, PCD 100 operates to facilitate family connection with the extended family. As further described below, daily information including, but not limited to, a person's schedule, events, mood, and the like may provide important context for how PCD 100 interacts, recommends, offers activities, offers information, and the like to the user.
[00299] In accordance with exemplary and non-limiting embodiments, contextual, longitudinal data acquired by PCD 100 facilitates an adaptive system that configures its functions and features to become increasingly tailored to the interests, preferences, and use cases of the user(s). For instance, if the PCD 100 learns that a user likes music, it can automatically download the "music attribute" from the cloud to be able to discover music likes, play music of that kind, and make informed music recommendations.
[00300] In this way, PCD 100 learns about a user's life. PCD 100 can sense the user in the real world and it can gather data from the ecology of other devices, technologies, systems, personal computing devices, and personal electronic devices that are connected to the PCD 100. From this collection of longitudinal data, the PCD 100 learns about the person and the patterns of activities that enable it to configure itself to be better adapted and matched to the functions it can provide. Importantly, PCD 100 learns about your social/family patterns and who the important people are in your life (your extended family); it learns about and tracks your emotions/moods; it learns about important behavioral patterns (when you tend to do certain things); it learns your preferences, likes, etc.; and it learns what you want to know about, what entertains you, etc.
[00301] As described more fully below, PCD 100 is configured to interact with a user to provide a longitudinal data collection facility for collecting data about the interactions of the user of PCD 100 with PCD 100.
[00302] In accordance with exemplary and non-limiting embodiments, PCD 100 is configured to acquire longitudinal data comprising one or more attributes of persistent interaction with a user via interaction involving visual, auditory and tactile sensors 102, 104, 106, 108, 112. In each instance, visual, auditory and tactile sensations may be perceived or otherwise acquired by PCD 100 from the user as well as conveyed by PCD 100 to the user. For example, PCD 100 may incorporate camera sensor 106 to acquire visual information from a user including data related to the activities, emotional state and medical condition of the user. Likewise, PCD 100 may incorporate audio sensor 112 to acquire audio information from a user including data derived from speech recognition, data related to stress levels as well as contextual information such as the identity of entertainment media utilized by the user. PCD 100 may further incorporate tactile sensor 102 to acquire tactile information from a user including data related to a user's touching or engaging in physical contact with PCD 100 including, but not limited to, petting and hugging PCD 100. In other embodiments, a user may also use touch to navigate a touch screen interface of PCD 100. In other embodiments, a location of PCD 100 or a user may be determined, such as via a cell phone the user is carrying, and used as input to give location context-relevant information and provide services.
[00303] As noted, visual, auditory and tactile sensations may be conveyed by PCD 100 to the user. For example, the audio output device may be used to output sounds, alarms, music, voice instructions and the like and to engage in conversation with a user. Similarly, a graphical element may be utilized to convey text and images to a user as well as operate to convey graphical data comprising a portion of a communication interaction between PCD 100 and the user. PCD 100 can also use ambient light and other cues (its LED pom-pom). Tactile device 102 may be used to convey PCD 100 emotional states and various other data, via, for example, vibrating, and to navigate the interface/content of the device. The device may emit different scents that suit the situation, mood, etc. of the user.
[00304] Information may be gathered through different devices that are connected to the PCD 100. This could come from third party systems data (e.g., medical, home security, etc.), mobile device data (music playlists, photos, search history, calendar, contact lists, videos, etc.), desktop computer data (esp. entered through the PCD 100 portal), or the like.
[00305] In addition to the sensors described above, data and information involved in interactions between PCD 100 and a user may be acquired from, stored on and outputted to various data sources. In exemplary and non-limiting embodiments, interaction data may be stored on and transmitted between PCD 100 and a user via cloud data or other modes of connectivity (Bluetooth, etc.). In one embodiment, access may be enabled by PCD 100 to a user's cloud stored data to enable interaction with PCD 100. For example, PCD 100 may search the internet, use an app/service, or access data from the cloud - such as a user's schedule from cloud storage - and use information derived therefrom to trigger interactions. As one example, PCD 100 may note that a user has a breakfast appointment with a friend at 9:00 am at a nearby restaurant. If PCD 100 notices that the user is present at home five minutes before the appointment, PCD 100 may interact with the user by speaking via audio device 110 to query if the user shouldn't be getting ready to leave. In an exemplary embodiment, PCD 100 may accomplish this feat by autonomously performing a time of travel computation based on present GPS coordinates and those of the restaurant. In this manner, PCD 100 may apply one or more algorithms to accessed online or cloud data to trigger actions that result in rapport building interactions between PCD 100 and the user. People can communicate with PCD 100 via social networking, real-time or asynchronous methods, such as sending texts, establishing a real-time audio-visual connection, connecting through other apps/services (Facebook, twitter, etc.), and the like. Other examples include access by the PCD 100 to entertainment and media files of the user stored in the cloud including, but not limited to, iTunes and Netflix data that may be used to trigger interactions.
[00306] In a similar manner, in accordance with other exemplary embodiments, interaction data may be stored in proximity to or in a user's environment such as on a server or personal computer or mobile device, and may be accessible by the user. PCD 100 may likewise store data in the cloud. In other embodiments, interaction data may be acquired via sensors external to PCD 100.
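The time-of-travel check in the breakfast-appointment example above might look like the following; the average-speed estimate and the five-minute buffer are arbitrary assumptions, and a real system would query a routing service instead.

    from datetime import datetime, timedelta

    def minutes_of_travel(distance_km, avg_speed_kmh=40.0):
        """Very rough door-to-door travel estimate."""
        return (distance_km / avg_speed_kmh) * 60.0

    def should_prompt_to_leave(now, appointment_time, distance_km, buffer_minutes=5):
        """True when the user ought to be getting ready to leave."""
        leave_by = appointment_time - timedelta(
            minutes=minutes_of_travel(distance_km) + buffer_minutes)
        return now >= leave_by

    print(should_prompt_to_leave(datetime(2017, 3, 31, 8, 55),
                                 datetime(2017, 3, 31, 9, 0), 3.0))  # True: time to go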
[00307] In accordance with exemplary and non-limiting embodiments, there may be generated an activities log and a device usage log, such as may be stored on PCD 100, on a server or in the cloud, which may be utilized to facilitate interaction. The activities log may store information recording activities engaged in by the user, by PCD 100 or by both the user and PCD 100 in an interactive manner. For example, an activities log may record instances of PCD 100 and the user engaging in the game of chess. There may additionally be stored information regarding the user's emotional state during such matches from which may be inferred the user's level of enjoyment. Using this data, PCD 100 may determine such things as how often the user desires to play chess, how long has it been since PCD 100 and the user last played chess, the likelihood of the user desiring to engage in a chess match and the like. In a similar manner, a device usage log may be stored and maintained that indicates when, how often and how the user prefers to interact with PCD 100. As is evident, both the activities log and the device usage log may be used to increase both the frequency and quality of interactions between PCD 100 and the user.
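A minimal sketch of such an activities log, with a toy measure of inferred enjoyment for the chess example above, follows; the mood labels and the simple ratio are assumptions for illustration.

    from datetime import datetime

    class ActivityLog:
        """Record what was done, when, and how the user seemed to feel."""
        def __init__(self):
            self.entries = []

        def record(self, activity, mood):
            self.entries.append({"activity": activity, "mood": mood,
                                 "time": datetime.utcnow()})

        def enjoyment(self, activity):
            """Fraction of sessions of this activity logged with a positive mood."""
            sessions = [e for e in self.entries if e["activity"] == activity]
            if not sessions:
                return 0.0
            return sum(e["mood"] == "happy" for e in sessions) / len(sessions)

    log = ActivityLog()
    log.record("chess", "happy")
    log.record("chess", "neutral")
    print(log.enjoyment("chess"))  # 0.5 -> PCD 100 might offer another chess game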
[00308] In accordance with an exemplary and non-limiting embodiment, interaction data may be acquired via manual entry. Such data may be entered by the user directly into PCD 100 via input devices 102, 104, 106, 108, 112 forming a part of PCD 100 or into a computing device, such as a server, PDA, personal computer and the like, and transmitted or otherwise communicated to PCD 100, such as via Bluetooth or Wi-Fi/cloud. In other embodiments, interaction data may be acquired by PCD 100 via a dialog between PCD 100 and the user. For example, PCD 100 may engage in a dialog with the user comprising a series of questions with the user's answers converted to text via speech recognition software operating on PCD 100, on a server or in the cloud, with the results stored as interaction data. The same applies to GUI or touch-based interaction.
[00309] In accordance with an exemplary and non-limiting embodiment, interaction data may be generated via a sensor 102, 104, 106, 108, 112 configured to identify olfactory data. Likewise, PCD 100 may be configured to emit olfactory scents. In yet other embodiments, GPS and other location determining apparatus may be incorporated into PCD 100 to enhance interaction. For example, a child user may take his PCD 100 on a family road trip or vacation. While in transit, PCD 100 may determine its geographic location, access the internet to determine nearby landmarks and engage in a dialogue with the child that is relevant to the time and place by discussing the landmarks.
[00310] In addition to ascertaining topics for discussion in this manner, in some embodiments, the results of such interactions may be transmitted at the time or at a later time to a remote storage facility whereat there is accumulated interaction data so acquired from a plurality of users in accordance with predefined security settings. In this manner, a centralized database of preferable modes of interaction may be developed based on a statistical profile of a user's attributes and PCD 100 acquired data, such as location. For instance, in the previous example, PCD 100 may determine its location as being on the National Mall near the Air and Space Museum and opposite the Museum of Natural History. By accessing a centralized database and providing the user's age and location, it may be determined that other children matching the user's age profile tend to be interested in dinosaurs. As a result, PCD 100 commences to engage in a discussion of dinosaurs while directing the user to the Museum of Natural History.
[00311] In accordance with an exemplary and non-limiting embodiment, PCD 100 may modulate aspects of interaction with a user based, at least in part, upon various physiological and physical attributes and parameters of the user. In some embodiments, PCD 100 may employ gaze tracking to determine the direction of a user's gaze. Such information may be used, for example, to determine a user's interest or to gauge evasiveness. Likewise, a user's heart rate and breathing rate may be acquired. In yet other embodiments, a user's skin tone may be determined from visual sensor data and utilized to ascertain a physical or emotional state of the user. Other behavioral attributes of a user that may be ascertained via sensors 102, 104, 106, 108, 112 include, but are not limited to, vocal prosody and word choice. In other exemplary embodiments, PCD 100 may ascertain and interpret physical gestures of a user, such as waving or pointing, which may be subsequently utilized as triggers for interaction. Likewise, a user's posture may be assessed and analyzed by PCD 100 to determine if the user is standing, slouching, reclining and the like.
[00312] In accordance with various exemplary and non-limiting embodiments, interaction between PCD 100 and a user may be based, at least in part, upon a determined emotional or mental state or attribute of the user. For example, PCD 100 may determine and record the rate at which a user is blinking, whether the user is smiling or biting his/her lip, the presence of user emitted laughter and the like to ascertain whether the user is likely to be, for example, nervous, happy, worried, amused, etc. Similarly, PCD 100 may observe a user's gaze being fixated on a point in space while the user remains relatively motionless and silent in an otherwise silent environment and determine that the user is in a state of thought or confused. In yet other embodiments, PCD 100 may interpret user gestures such as nodding or shaking one's head as indications of mental agreement or disagreement.
[00313] In accordance with an exemplary and non-limiting embodiment, the general attributes of the interface via which a user interacts may be configured and/or coordinated to provide an anthropomorphic or non-human based PCD 100. In one embodiment, PCD 100 is configured to display the characteristics of a non-human animal. By so doing, interaction between PCD 100 and a user may be enhanced by mimicking and/or amplifying an existing emotional predilection by a user for a particular animal. For example, PCD 100 may imitate a dog by barking when operating to convey an excited state. PCD 100 may further be fitted with a tail like appendage that may wag in response to user interactions. Likewise, PCD 100 may output sounds similar to the familiar feline "meow". In addition to the real time manifestations of a PCD 100 interface, such interface attributes may vary over time to further enhance interaction by adjusting the aging process of the user and PCD 100 animal character. For example, a PCD 100 character based on a dog may mimic the actions of a puppy when first acquired and gradually mature in its behaviors and interactions to provide a sense on the part of the user that the relationship of the user and the PCD character is evolving.
[00314] As noted, in addition to PCD characteristics based on animals or fictional creatures, PCD 100 may be configured to provide an anthropomorphic interface modeled on a human being. Such a human being, or "persona", may be pre-configured, user definable or some combination of the two. This may include impersonations where PCD 100 may take on the mannerisms and characteristics of a celebrity, media personality or character (e.g., Larry Bird, Jon Stewart, a character from Downton Abbey, etc.). The persona, or "digital soul", of PCD 100 may be stored (e.g. in the cloud), in addition to being resident on PCD 100, external to PCD 100 and may therefore be downloaded and installed on other PCDs 100. These other PCDs can be graphical (e.g., its likeness appears on the user's mobile device) or embodied in another physical PCD 100 (e.g., a new model).
[00315] The Persona of PCD 100 can also be of a synthetic or technological nature. As a result, PCD 100 functions as personified technology wherein device PCD 100 is seen to have its own unique persona, rather than trying to emulate something else that already exists such as a person, animal, known character and the like. In some embodiments, proprietary personas may be created for PCD 100 that can be adapted and modified over time to better suit its user. For example, the prosody of a user's PCD 100 may adapt over time to mirror more closely that of its user's own prosody as such techniques build affinity and affection. PCD 100 may also change its graphical appearance to adapt to the likes and preferences of its user in addition to any cosmetic or virtual artifacts its user buys to personalize or customize PCD 100.
[00316] In an exemplary embodiment, the digital soul of PCD 100 defines characteristics and attributes of the interface of PCD 100 as well as attributes that affect the nature of interactions between user and PCD 100. While this digital soul is bifurcated from the interaction data and information utilized by PCD 100 to engage in interaction with a user, the digital soul may change over time in response to interaction with particular users. For example, two separate users, each with their own PCD 100, may install an identical digital soul based, for example, on a well-known historical figure, such as Albert Einstein. From the moment of installation on the two separate PCDs 100, each PCD 100 will interact in a different manner depending on the user specific interaction data generated by and accessible to PCD 100. The Digital Soul can be embodied in a number of forms, from different physical forms (e.g., robotic forms) to digital forms (e.g., graphical avatars).
[00317] In accordance with an exemplary and non-limiting embodiment, PCD 100 provides a machine learning facility for improving the quality of the interactions based on collected data. The algorithms utilized to perform the machine learning may execute on PCD 100 or on a computing platform in communication with PCD 100. In an exemplary embodiment, PCD 100 may employ association conditioning in order to interact with a user to provide coaching and training. Association, or "operant", conditioning focuses on using reinforcement to increase a behavior. Through this process, an association is formed between the behavior and the consequences for that behavior. For example, PCD 100 may emit a happy noise when a user wakes up quickly and hops out of bed as opposed to remaining stationary. Over time, this interaction between PCD 100 and the user operates to motivate the user to rise more quickly as the user associates PCD's 100 apparent state of happiness with such an action. In another example, PCD 100 may emit encouraging sounds or words when it is observed that the user is exercising. In such an instance PCD 100 serves to provide persistent positive reinforcement for actions desired by the user.
[00318] In accordance with various exemplary embodiments, PCD 100 may employ one of a plurality of types of analysis known in the art when performing machine learning including, but not limited to temporal pattern modeling and recognition, user preference modeling, feature classification, task/policy modeling and reinforcement learning.
[00319] In accordance with exemplary and non-limiting embodiments, PCD 100 may employ a visual, audio, kinesthetic, or "VAK", model for identifying a mode of interaction best suited to interacting with a user. PCD 100 may operate to determine the dominant learning style of a user. For example, if PCD 100 determines that a user processes information in a predominantly visual manner, PCD 100 may employ charts or illustrations, such as on a graphic display 104 forming a part of PCD 100, to convey information to the user. Likewise, PCD 100 may operate to issue questions and other prompts to a user to help them stay alert in auditory environments.
[00320] Likewise, if PCD 100 determines that a user processes information in a predominantly auditory manner, PCD 100 may commence new interactions with a brief explanation of what is coming and may conclude with a summary of what has transpired. Lastly, if PCD 100 determines that a user processes information in a predominantly kinesthetic manner, PCD 100 may operate to interact with the user via kinesthetic and tactile interactions involving movement and touch. For example, to get a user up and active in the morning, PCD 100 may engage in an activity wherein PCD 100 requests a hug from the user. In other embodiments, to highlight and reinforce an element of a social interaction, PCD 100 may emit a scent related to the interaction.
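A minimal sketch of selecting a presentation strategy from VAK style scores follows; the score dictionary and the strategy strings merely paraphrase the behaviors described above and are not drawn from this disclosure.

    def choose_modality(style_scores):
        """Pick the dominant VAK style and a matching presentation strategy."""
        strategies = {
            "visual": "show charts or illustrations on the graphic display",
            "auditory": "open with a spoken overview and close with a summary",
            "kinesthetic": "use movement and touch, e.g. ask for a hug",
        }
        dominant = max(style_scores, key=style_scores.get)
        return dominant, strategies[dominant]

    print(choose_modality({"visual": 0.2, "auditory": 0.5, "kinesthetic": 0.3}))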
[00321] The ability to move PCD 100 around the house is an important aspect of PCD 100. In operation, PCD 100 operates to give a remote person a physically embodied and physically socially expressive way to communicate that allows people to "stay in the flow of their life" rather than having to stop and huddle in front of a screen (modern video conferencing). As a result, PCD 100 provides support for casual interactions, as though a user were visiting someone in their house. A user may be doing other activities, such as washing dishes, etc., and still be carrying on a conversation because of how the PCD 100 can track the user around the room. In exemplary embodiments described above, PCD 100 is designed to have its sensors and outputs carry across a room, etc. Core technical aspects include:
[00322] A user may control the PCD 100's camera view, and it can also help to automate this by tracking and doing the inverse kinematics to keep its camera on the target object.
[00323] PCD 100 may render a representation of you (video stream, graphics, etc.) to the screen in a way that preserves important non-verbal cues like eye contact.
[00324] PCD 100 may mirror the remote person's head pose, body posture so that person has an expressive physical presence. PCD 100 may also generate its own expressive body movements to suit the situation, such as postural mirroring and synchrony to build rapport.
[00325] PCD 100 may further trigger fun animations and sounds. So a user may either try to convey himself or herself accurately, or as a fun character. This is really useful for connected story reading, where a grandma can read a story remotely with her grandchild, while taking on different characters during the story session.
[00326] PCD 100 may track who is speaking to automatically shift its gaze/your camera view to the speaker (to reduce the cognitive load in having to manually control the PCD 100)
[00327] PCD 100 may have a sliding autonomy interface so that the remote user can assert more or less direct control over the PCD 100, and it can use autonomy to supplement.
[00328] PCD 100 may provide a user with a wide field of view (much better than the tunnel vision other devices provide/assume because you have to stay in front of it)
[00329] By doing all these things, and being able to put PCD 100 in different places around the house, the remote person feels that now they not only can communicate, but can participate in an activity. To be able to share a story at bedtime, be in the playroom and play with grandkids, participate in thanksgiving dinner remotely, sit on the countertop as you help your daughter cook the family recipe, etc. It supports hands free operation so you feel like you have a real physical social presence elsewhere.
[00330] In accordance with exemplary and non-limiting embodiments, PCD 100 may be configured or adapted to be positioned in a stable or balanced manner on or about a variety of surfaces typical of the environment in which a user lives and operates. For example, generally planar surfaces of PCD 100 may be fabricated from or incorporate, at least in part, friction pads which operate to prevent sliding of PCD 100 on smooth surfaces. In other embodiments, PCD 100 may employ partially detachable or telescoping appendages that may be either manually or automatically deployed to position PCD 100 on uneven surfaces. In other embodiments, the device may have hardware accessories that enable it to locomote in the environment or manipulate objects. It may be equipped with a laser pointer or projector to be able to display on external surfaces or objects. In such instances, PCD 100 may incorporate friction pads on or near the extremities of the appendages to further reduce slipping. In yet other embodiments, PCD 100 may incorporate one or more suction cups on an exterior surface or surfaces of PCD 100 for temporary attachment to a surface. In yet other embodiments, PCD 100 may incorporate hooks, loops and the like for securing PCD 100 in place and/or hanging PCD 100.
[00331] In other exemplary embodiments, PCD 100 is adapted to be portable by hand. Specifically, PCD 100 is configured to weigh less than 10 kg and occupy a volume of no more than 4,000 cm3. Further, PCD 100 may include an attached or detachable strap or handle for use in carrying PCD 100.
[00332] In accordance with exemplary and non-limiting embodiments, PCD 100 is configured to be persistently aware of, or capable of determining via computation, the presence or occurrence of social cues and to be socially present. As such, PCD 100 may operate so as to avoid periods of complete shutdown. In some embodiments, PCD 100 may periodically enter into a low power state, or "sleep state", to conserve power. During such a sleep state, PCD 100 may operate to process a reduced set of inputs likely to alert PCD 100 to the presence of social cues, such as a person or user entering the vicinity of PCD 100, the sound of a human voice and the like. When PCD 100 detects the presence of a person or user with whom PCD 100 is capable of interacting, PCD 100 may transition to a fully alert mode wherein more or all of PCDs 100 sensor inputs are utilized for receiving and processing contextual data.
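A minimal two-state sketch of the sleep/alert behavior follows; the idle-time threshold and the sensor inputs are assumptions for illustration only.

    class PowerState:
        """Sleep/alert state machine per the behavior described above."""
        SLEEP, ALERT = "sleep", "alert"

        def __init__(self):
            self.state = self.ALERT

        def tick(self, person_detected, voice_heard, idle_minutes):
            if self.state == self.ALERT and idle_minutes > 30 and not person_detected:
                self.state = self.SLEEP   # conserve power, keep a reduced set of inputs active
            elif self.state == self.SLEEP and (person_detected or voice_heard):
                self.state = self.ALERT   # wake fully, e.g. greeting the user with a yawn
            return self.state

    pcd = PowerState()
    print(pcd.tick(person_detected=False, voice_heard=False, idle_minutes=45))  # sleep
    print(pcd.tick(person_detected=True, voice_heard=False, idle_minutes=0))    # alert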
[00333] The ability to remain persistently aware of social cues reduces the need for PCD 100 to ever be powered off or manually powered on. As the ability to be turned off and on is an attribute associated with machine devices, the ability of PCD 100 to avoid being in a fully powered down mode serves to increase the perception that PCD 100 is a living companion. In some embodiments, PCD 100 may augment being in a sleep state by emitting white noise or sounds mimicking snoring. In such an instance, when a user comes upon PCD 100, PCD 100 senses the presence of the user and proceeds to transition to a fully alert or powered up mode by, for example, greeting the user with a noise indicative of waking up, such as a yawn. Such actions serve as cues to begin interactions between PCD 100 and a user.
[00334] In accordance with exemplary and non-limiting embodiments, PCD 100 is adapted to monitor, track and characterize verbal and nonverbal signals and cues from a user. Examples of such cues include, but are not limited to, gesture, gaze direction, word choice, vocal prosody, body posture, facial expression, emotional cues, touch and the like. All such cues may be captured by PCD 100 via sensor devices 102, 104, 106, 108, 112. PCD 100 may further be configured to adapt and adjust its behavior to effectively mimic or mirror the captured cues. By so doing, PCD 100 increases rapport between PCD 100 and a user by seeming to reflect the characteristics and mental states of the user. Such mirroring may be incorporated into the personality or digital soul of PCD 100 for long-term projection of said characteristics by PCD 100 or may be temporary and extend, for example, over a period of time encompassing a particular social interaction.
[00335] For example, if PCD 100 detects that a user periodically uses a particular phrase, PCD 100 may add the phrase to the corpus of interaction data for persistent use by PCD 100 when interacting with the user in the future. Similarly, PCD 100 may mimic transient verbal and non-verbal gestures in real or near real time. For example, if PCD 100 detects a raised frequency of a user's voice coupled with an increased word rate indicative of excitement, PCD 100 may commence to interact verbally with the user in a higher than normal frequency with an increased word rate. [00336] In accordance with exemplary and non-limiting embodiments, PCD 100 may project a distinct persona or digital soul via various physical manifestations forming a part of PCD 100 including, but not limited to, body form factor, physical movements, graphics and sound. In one embodiment, PCD 100 may employ expressive mechanics. For example, PCD 100 may incorporate a movable jaw appendage that may be activated when speaking via the output of an audio signal. Such an appendage may be granted a number of degrees of freedom sufficient to mimic a smile or a frown as appropriate. Similarly, PCD 100 may be configured with one or more "eye like" accessories capable of changing a degree of visual exposure. As a result, PCD 100 can display a "wide eyed" expression in response to being startled, surprised, interested and the like.
[00337] In accordance with exemplary and non-limiting embodiments, PCD 100 may detect its posture or position in space to transition between, for example, a screen mode and an overall mode. For example, if PCD 100 incorporates a screen 104 for displaying graphical information, PCD 100 may transition from whatever state it is in to a mode that outputs information to the screen when a user holds the screen up to the user's face and into a position from which the user can view the display.
[00338] In accordance with another embodiment, one or more pressure sensors forming a part of PCD 100 may detect when a user is touching PCD 100 in a social manner. For example, PCD 100 may determine from the pattern in which more than one pressure sensor is experiencing pressure that a user is stroking, petting or patting PCD 100. Different detected modes of social touch may serve as triggers to PCD 100 to exhibit interactive behaviors that encourage or inhibit social interaction with the user.
[00339] In accordance with exemplary and non-limiting embodiments, PCD 100 may be fitted with accessories to enhance the look and feel of PCD 100. Such accessories include, but are not limited to, skins, costumes, both internal and external lights, masks and the like.
[00340] As described above, the persona or digital soul of PCD 100 may be bifurcated from the physical manifestation of PCD 100. The attributes comprising a PCD 100 persona may be stored as digital data which may be transferred and communicated, such as via Bluetooth or Wi-Fi to one or more other computing devices including, but not limited to, a server and a personal computing device. In such a context, a personal computing device can be any device utilizing a processor and stored memory to execute a series of programmable steps. In some embodiments, the digital soul of PCD 100 may be transferred to a consumer accessory such as a watch or a mobile phone. In such an instance, the persona of PCD 100 may be effectively and temporarily transferred to another device. In some embodiments, while transferred, the transferred instance of PCD 100 may continue to sense the environment of the user, engage in social interaction, and retrieve and output interaction data. Such interaction data may be transferred to PCD 100 at a later time or uploaded to a server for later retrieval by PCD 100.
[00341] In accordance with exemplary and non-limiting embodiments, PCD 100 may exhibit visual patterns, which adjust in response to social cues. For example, display 104 may emit red light when excited and blue light when calm. Likewise, display 104 may display animated confetti falling in order to convey jubilation such as when a user completes a task successfully. In some embodiments, the textures and animations for display may be user selectable or programmable either directly into PCD 100 or into a server or external device in communication with PCD 100. In yet other embodiments, PCD 100 may emit a series of beeps and whistles to express simulated emotions. In some embodiments, the beeps and whistles may be patterned upon patterns derived from the speech and other verbal utterances of the user. In some instances, the beeps, whistles and other auditory outputs may serve as an auditory signature unique to PCD 100. In some embodiments, variants of the same auditory signature may be employed on a plurality of PCDs 100, such as a group of "related" PCDs 100 forming a simulated family, to indicate a degree of relatedness.
[00342] In some embodiments, PCD 100 may engage in anamorphic transitioning between modes of expression to convey an emotion. For example, PCD 100 may operate a display 104 to transition from a random or pseudorandom pattern or other graphic into a display of a smiling or frowning mouth as a method for displaying human emotion.
[00343] In other exemplary embodiments, PCD 100 may emit scents or pheromones to express emotional states.
[00344] In accordance with yet another exemplary embodiment, PCD 100 may be provided with a back story in the form of data accessible to PCD 100 that may form the basis of interactions with users. Such data may comprise one or more stories making reference to past events, both real and fictional, that form a part of PCD 100's prior history. For example, PCD 100 may be provided with stories that may be conveyed to a user via speech generation that tell of past occurrences in the life of PCD 100. Such stories may be outputted upon request by a user or may be triggered by interaction data. For example, PCD 100 may discern from user data that today is the user's birthday. In response, PCD 100 may be triggered to share a story with the user related to a past birthday of PCD 100. Data comprising the back story may be centrally stored and downloaded to PCD 100 upon request by a user or autonomously by PCD 100.
[00345] Back stories may be generated and stored by a manufacturer of PCD 100 and made available to a user upon request. With reference to FIG. 11, there is illustrated a flowchart 1100 of an exemplary and non-limiting embodiment. In an example, at step 1102, a manufacturer may receive as input a request for a back-story for a PCD 100 modeled on a dog associated with a user interested in sports, particularly baseball and the Boston Red Sox. In response, the manufacturer or third party back-story provider may generate a base back story, at step 1104. In an example, the story may comprise relatively generic dog stories augmented by more particular stories dealing with baseball to which are added details related to the Red Sox.
[00346] In some embodiments, at step 1106, the back-story may be encoded with variables that will allow for further real time customization by PCD 100. For example, a back story may be encoded in pseudo code such as: "Me and my brothers and sisters <for i=1 to max_siblings, insert sibling_name[i]> were raised in ...". In this manner, when read by PCD 100, the story may be read as including the names of other PCDs 100 configured as related to PCD 100.
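By way of a non-limiting illustration, the following JavaScript sketch shows how such a template might be expanded at runtime; it uses a simpler placeholder syntax than the pseudo code above, and the expandBackStory function, the placeholder names, and the sample values are assumptions for illustration rather than the actual encoding used by PCD 100.

// Hypothetical sketch: expanding a back-story template with runtime data.
// The <siblings> and <birthplace> placeholders and the sample values are
// illustrative assumptions, not the actual PCD 100 encoding.
function expandBackStory(template, context) {
  return template.replace(/<(\w+)>/g, function (match, key) {
    var value = context[key];
    return Array.isArray(value) ? value.join(" and ") : String(value);
  });
}

var template = "Me and my brothers and sisters <siblings> were raised in <birthplace>.";
var story = expandBackStory(template, {
  siblings: ["Rover", "Astro"],          // names of PCDs configured as related
  birthplace: "a workshop by the sea"    // generic back-story detail
});
// The resulting string can then be passed to the PCD's speech generation.
console.log(story);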
[00347] In accordance with an exemplary and non-limiting embodiment, PCD 100 may be provided with an executable module or program for managing a co-nurturance feature of PCD 100 whereby the user is encouraged to care for the companion device. For example, a co-nurturance module may operate to play upon a user's innate impulse to care for a baby by commencing interaction with a user via behavior involving sounds, graphics, scents and the like associated with infants. Rapport between PCD 100 and a user may be further encouraged when a co-nurturance module operates to express a negative emotion such as sadness, loneliness and/or depression while soliciting actions from a user to alleviate the negative emotion. In this way, the user is encouraged to interact with PCD 100 to cheer up PCD 100.
[00348] In accordance with an exemplary and non-limiting embodiment, PCD 100 may include a module configured to access interaction data indicative of user attributes, interactions of the user of PCD 100 with PCD 100, and the environment of the user of PCD 100. With reference to FIG. 12, there is illustrated a flowchart 1200 of an exemplary and non-limiting embodiment. At step 1202, the interaction data is accessed. At step 1204, the interaction data may be stored in a centralized data collection facility. Once retrieved and stored, at step 1206, the interaction data may be utilized to anticipate a need state of the user. Once a need state is identified, it can be utilized to proactively address a user's needs without reliance on a schedule for performing an action, at step 1208. In some embodiments, a user's physical appearance, posture and the like may form the basis for identifying a need state. In some instances, the identification of a need state may be supplemented by schedule data, such as comprising a portion of interaction data. For example, a schedule may indicate that it is past time to fulfill a user's need to take a dose of antibiotics. PCD 100 may ascertain a user's need state, in part, from data derived from facial analysis and voice modulation analysis.
[00349] In accordance with exemplary and non-limiting embodiments, PCD 100 may be used as a messenger to relay a message from one person to another. Messages include, but are not limited to, audio recordings of a sender's voice, PCD 100 relaying a message in character, dances/animations/sound clips used to enhance the message, and songs.
[00350] Messages may be generated in a variety of ways. In one embodiment, PCD 100 is embodied as an app on a smart device. The sender may open the app and select a message and associated sounds, scheduling, etc. A virtual instance of PCD 100 in the app may walk the user through the process. In another embodiment, through direct interaction with PCD 100, a sender/user may instruct PCD 100, via a simple touch interface or a natural language interface, to tell another person something at some future time. For example, a user might say "PCD, when my wife comes into the kitchen this morning, play her X song and tell her that I love her". The sender might also have PCD 100 record his/her voice to use as part of the message. In other embodiments, instead of a sender's PCD 100 delivering the message, the message may be delivered by a different PCD 100 at another location. In yet another embodiment, a user/sender can, for instance, tweet a message to a hashtag associated with a specific PCD 100, and PCD 100 will speak that message to the user/recipient. Emoticons may also be inserted into the message, prompting a canned animation/sound script to be acted out by PCD 100. Some exemplary emoticons are:
Table 1: Emoticon Definitions (table of exemplary emoticons not reproduced)
[00351] In addition, messages may be scheduled to be sent later, at a particular date and time or under a certain set of circumstances (e.g., "the first time you see person X on Tuesday", or "when person Y wakes up on Wednesday, give them this message").
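By way of a non-limiting illustration, the following JavaScript sketch shows how a skill or companion app might represent such a conditionally scheduled message; the pcd.messages.schedule call, the trigger fields, and the global pcd object are hypothetical assumptions and do not represent the actual PCD interface.

// Illustrative sketch only: all API names below are assumptions.
// Assumes a global `pcd` object exposed by the (hypothetical) SDK runtime.
var message = {
  recipient: "Alice",
  body: "Happy birthday! Missing you a lot.",
  say: true,                               // relay the message in character
  animation: "confetti",                   // optional expressive overlay
  trigger: {
    type: "first-sighting",                // deliver the first time Alice is seen
    notBefore: "2017-04-04T07:00:00"       // but not before this date and time
  }
};
pcd.messages.schedule(message);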
[00352] In other embodiments, PCD 100 may be used to generate messages for users who don't have PCDs. Such messages may be generated in the form of a weblink, and may incorporate a Virtual PCD 100 for delivering the message just as a physical PCD 100 would if the receiver had one.
[00353] As is therefore evident, PCD 100 may be configured to receive messages from persons, such as friends and family of the user, wherein the messages trigger actions related to emotions specified in the messages. For example, a person may text a message to a PCD 100 associated with a user within which is embedded an emoticon representing an emotion or social action that the sender of the message wishes to convey via PCD 100. For example, if a sender sends a message to PCD 100 reading "Missing you a lot OX", PCD 100 may, upon receiving the message, output, via a speech synthesizer, "Incoming message from Robert reads 'Missing you a lot'" while simultaneously emitting a kissing sound, displaying puckered lips on a display or taking a similar action. In this way, message senders may annotate their messages to take advantage of the expressive modalities by which PCD 100 may interact with a user.
[00354] With reference to FIG. 13, there is illustrated a flowchart of a respective method 1300 of an exemplary and non-limiting embodiment. The method comprises providing a persistent companion device (PCD) at step 1302. The method further comprises inputting at least one of a verbal and a nonverbal signal from a user, selected from the group consisting of gesture, gaze direction, word choice, vocal prosody, body posture, facial expression, emotional cues and touch, at step 1304. The method further comprises adjusting a behavior of the PCD to mirror the at least one of the verbal and nonverbal signals, at step 1306.
[00355] All the above attributes of the development platform, libraries, assets, PCD and the like may be extended to support other languages and cultures (localization).
[00356] With reference to FIG. 14, there is illustrated an exemplary and non-limiting embodiment of an example whereby PCD 100 may utilize a user interface to display a recurring, persistent, or semi-persistent, visual element, such as an eye, during an interaction with a user. For example, as shown below, to display a question mark, the visual element 1400, comprising a lighter circle indicative of an iris or reflection on the surface of the eye, may shift its position to the bottom of the question mark as the eye morphs or otherwise smoothly transitions into a question mark visual element 1400"' via intermediary visual elements 1400', 1400". The ability of the visual element to morph as described and illustrated results in high readability. [00357] With reference to FIG. 15, there is illustrated an exemplary and non-limiting embodiment of an example whereby a visual element 1500, in instances where the eye is intended to morph into a shape that is too visually complex for the eye, may "blink" as illustrated to transition into the more visually complex shape 1500'. For example, as illustrated, the visual element of the eye 1500 "blinks" to reveal a temperature or other weather related variable shape 1500'.
[00358] With reference to FIG. 16, there is illustrated an exemplary and non-limiting embodiment of an example whereby a mouth symbol may be formed or burrowed out of the surface area of the eye visual element. In various embodiments, the color of the visual element may be altered to reinforce the displayed expression.
[00359] In accordance with various exemplary and non-limiting embodiments, the PCD 100 may have and exhibit "skills," as compared to applications that run on conventional mobile devices like smartphones and tablets. Just like applications that run on mobile platforms like iOS and Android, the PCD 100 may support the ability to deploy a wide variety of new skills. A PCD skill may comprise a JavaScript package, along with assets and configuration files that may invoke various JavaScript APIs, as well as feed information to an execution engine. As a result, both internal and external developers may be supported in developing new skills for the PCD 100.
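By way of a non-limiting illustration, a skill package might be organized as sketched below in JavaScript; the file layout and manifest fields shown are assumptions for illustration and do not represent a published PCD package format.

// Hypothetical layout of a PCD skill package (file names are assumptions):
//
//   greet-visitors/
//     manifest.json        package metadata and entry point
//     index.js             skill logic written against the JavaScript APIs
//     behaviors/main.bt    behavior tree produced by the behavior editor
//     grammars/main.rule   input grammar from the speech tool suite
//     assets/              animations, sounds, graphics
//
// A manifest, expressed here as a JavaScript object, might contain:
var manifest = {
  name: "greet-visitors",
  version: "0.1.0",
  main: "index.js",
  behaviors: ["behaviors/main.bt"],
  grammars: ["grammars/main.rule"],
  permissions: ["camera", "microphone"]   // sensitive resources require authorization
};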
[00360] As a fundamental principle, any new social robot skill is capable of being written entirely in JavaScript against a set of JavaScript APIs that comprise the core components of a software development kit (SDK) for developing new skills. However, to facilitate development, a set of tools, such as an expression tool suite and a behavior editor, may allow developers to create configuration files that feed into the execution engine, facilitating simpler and more rapid skill development as well as the use of previously developed skills.
[00361] With reference to FIG. 17, there is illustrated an exemplary and non-limiting embodiment of a platform for enabling a runtime skill for a PCD 100. As illustrated, various inputs 1700 are received which include, but are not limited to, imagery from a stereo RGB camera, a microphone array and touch sensitive sensors. Inputs 1700 may come via a touch screen. Inputs 1700 may form an input to sensory processing module 1702 at which processing is performed to extract information from and to categorize the input data. Inputs may come from devices or software applications external to the device, such as web applications, mobile applications, Internet of Things (IoT) devices, home automation devices, alarm systems, and the like. Examples of forms of processing that may be employed in sensory processing module include, but are not limited to, automated speech recognition (ASR), emotion detection, facial identification (ID), person or object tracking, beam forming, and touch identification. The results of the sensory processing may be forwarded as inputs to execution engine 1704. The execution engine 1704 may operate to apply a defined skill, optionally receiving additional inputs 1706 in the form of, for example, without limitation, one or more of an input grammar, a behavior tree, JavaScript, animations and speech/sounds. The execution engine 1704 may similarly receive inputs from a family member model 1708.
[00362] The execution engine 1704 may output data forming an input to expression module 1710 whereat the logically defined aspects of a skill are mapped to expressive elements of the PCD 100 including, but not limited to, animation (e.g., movement of various parts of the PCD), graphics (such as displayed on a screen, which may be a touchscreen, or movement of the eye described above), lighting, and speech or other sounds, each of which may be programmed in the expression module 1710 to reflect a mode, state, mood, persona or the like of the PCD as described elsewhere in this disclosure. The expression module 1710 may output data and instructions to various hardware components 1712 of a PCD 100 to express the skill including, but not limited to, audio output, a display, lighting elements, and movement enabling motors. Outputs may include control signals or data to devices or applications external to the PCD 100, such as IoT devices, web applications, mobile applications, or the like.
[00363] With reference to FIG. 18, there is illustrated an exemplary and non-limiting embodiment of a flow and various architectural components for a platform enabling development of a skill using the SDK. As illustrated, a logic level 1800 may communicate with a perceptual level 1802. Perceptual level 1802 may detect various events such as vision function events via vision function module 1804, an animation event via expression engine 1806 and a speech recognition event via speech recognizer 1806. Communication between logic level 1800 and perceptual level 1802 may serve to translate perceived events into expressed skills.
[00364] With this in mind, certain capabilities may be provided via a set of JavaScript APIs. First, JavaScript APIs may exist for various types of sensory input. JavaScript APIs may exist for various expression output. JavaScript APIs may also exist for the execution engine 1704, which in turn may invoke other existing JavaScript APIs. JavaScript APIs may exist for information stored within various models, such as a family member model 1708. The execution engine 1704 may use any of these APIs, such as by extracting information via them for use in the execution engine 1704. In embodiments, developers who do not use the execution engine may directly access the family member model 1708. Among other things, the PCD 100 may learn, such as using machine learning, about information, behavioral patterns, preferences, use case patterns, and the like, such as to allow the PCD 100 to adapt and personalize itself to one or more users, to its environment, and to its patterns of usage. Such data and the results of such learning may be embodied in the family member model 1708 for the PCD 100.
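By way of a non-limiting illustration, the JavaScript sketch below shows how a skill might combine a sensory input API with the family member model; the event name, the pcd.family.lookup call, and the global pcd object are hypothetical assumptions rather than the actual API surface.

// Sketch only: all API and event names below are assumptions.
pcd.sensory.on("person-detected", function (event) {
  var person = pcd.family.lookup(event.personId);   // query the family member model
  if (person && person.preferences && person.preferences.nickname) {
    pcd.expression.speak("Hi " + person.preferences.nickname + "!");
  } else {
    pcd.expression.speak("Hello! I don't think we've met yet.");
  }
});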
[00365] Sensory input APIs may include a wide range of types, including automated speech recognition (ASR) APIs, voice input APIs, APIs for processing other sounds (e.g., for music recognition, detection of particular sound patterns and the like), APIs for handling ultrasound or sonar, APIs for processing electromagnetic energy (visible light, radio signals, microwaves, X-rays, infrared signals and the like), APIs for image processing, APIs for handling chemical signals (e.g., detection of smoke, carbon monoxide, scents, and the like) and many others. Sensory input APIs may be used to handle input directly from sensors of the PCD 100 or to handle sensor data collected and transmitted by other sensory input sources, such as sensor networks, sensors of IOT devices, and the like.
[00366] With respect to various sensory inputs, timestamps may be provided to allow merging of various disparate sensory input types. For example, timestamps may be provided with a speech recognizer to allow merging of recognized speech with other sensory input. ASR may be used to enroll various speakers. Overall, a speech tool suite may be provided for the speech interface of the PCD 100.
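By way of a non-limiting illustration, the following JavaScript sketch merges a recognized utterance with the most recent face-tracking event by comparing timestamps; the event shapes, the 500 millisecond window, and the pcd.sensory API are assumptions for illustration.

// Sketch: attribute recognized speech to whoever was most recently in view,
// using the timestamps supplied with each sensory event (names assumed).
var recentFaces = [];

pcd.sensory.on("face-tracked", function (e) {
  recentFaces.push({ personId: e.personId, timestamp: e.timestamp });
});

pcd.sensory.on("speech-recognized", function (e) {
  var match = recentFaces
    .filter(function (f) { return Math.abs(f.timestamp - e.timestamp) < 500; })
    .pop();
  var speaker = match ? match.personId : "unknown";
  console.log(speaker + " said: " + e.text);
});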
[00367] Also provided may be a variety of face tracking and people tracking APIs, touch APIs, emotional recognition APIs, expression output APIs, movement APIs, screen and eye graphics APIs, lighting APIs (e.g., for LED lights), sound and text to speech (TTS) APIs, and various others. Sound and TTS APIs may allow the PCD 100 to play audio files, speak words from a string of text, or the like. This may be either constant or the content of a string variable, an arbitrary amount of silence, or any arbitrary combination of them. For instance, a developer can specify a command such as: Speak("beep.wav", NAME, "SIL 3sec", "I am so happy to see you"), resulting in a beeping sound, speaking a particular name represented by populating the NAME variable with an actual name, a silent period of three seconds, then the greeting. Text may be expressed in SSML (Speech Synthesis Markup Language). Simple text may be spoken according to conventional punctuation rules. In embodiments there may be expressive filters or sound effects overlaid or inserted into the spoken utterance.
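By way of a non-limiting illustration, the command above could be rendered in JavaScript roughly as follows; the pcd.expression.speak wrapper and the "SIL" silence notation are assumptions, since the exact calling convention of the Sound and TTS APIs may differ.

// Sketch of the composite speech output described above (API names assumed).
var NAME = "Chris";                        // contents of a string variable
pcd.expression.speak(
  "beep.wav",                              // play a sound asset
  NAME,                                    // speak the variable's value
  "SIL 3sec",                              // an arbitrary amount of silence
  "I am so happy to see you."              // literal text, spoken per punctuation rules
);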
[00368] The PCD SDK may include methods to upload content assets, like audio files, as well as to set properties of audio output, such as volume. The social robot may be configured to play various different formats, such as .wav, .mp3, and the like. Assets may be stored in various libraries, such as in the cloud or a local computing device. The PCD SDK may allow the PCD to search for assets, such as by searching the Internet, or one or more sites, for appropriate content, such as music, video, animations, or the like.
[00369] A set of family member and utility APIs may be provided that act as a front end to data stored remotely, such as in the cloud. These APIs may also include utilities that developers may want to use (such as logging, etc.).
[00370] A set of execution engine APIs may be provided to enable interface with the execution engine 1704. The execution engine 1704 may comprise an optional JavaScript component that can act on the configuration files created using several different tools, such as, without limitation, the Behavior Editor and the Expression Tool Suite. The execution engine may also multiplex data from the Family Member store, again making it easier for developers to write skills. In embodiments the Family Member store can also include hardware accessories to expand the physical capabilities of the PCD 100, such as projectors, a mobile base for the PCD 100, manipulators, speakers, and the like, as well as decorative elements that allow users to customize the appearance of the PCD 100.
[00371] One may follow a workflow to create a new PCD skill, commencing with asset creation and proceeding in turn to skill writing, simulation, testing and certification (such certification being provided in embodiments by a host enterprise that manages the methods and systems described herein).
[00372] With reference to FIG. 19, there is illustrated an exemplary and non-limiting embodiment of a user interface that may be provided for the creation of assets. Asset creation may involve creating the skill's assets. It may not necessarily be the first step, but is often an ongoing task in the flow of creating a skill, where assets get refined or expanded as the skill itself gets developed. The types of assets that may be created include animations, such as using a special tool within an expression tool suite to easily create new body and eye animations. Developers may also be able to repurpose body and eye animations in the "Developers" section of a PCD skills store. In embodiments developers may share their assets with consumers or other developers, such as on a skills store for the PCD 100 or other environment, such as a developer's portal. Assets may also include sounds, such that developers may create their own sounds using their favorite sound editor, as long as the resource is in an appropriate format with appropriately defined characteristics. Assets may include text-to-speech assets, leveraging a parametric TTS system, so that developers may create text-to-speech instances, and annotate these instances with various attributes (like "happy") that can modulate the speech.
[00373] Assets may include light visualizations, such as to control the LED lights on the PCD 100 (such as on the torso), in which case developers may use an expression tool suite to specify control. Note that developers can also repurpose LED light animations, such as from a "Developers" section of the PCD skills store as well.
[00374] Assets may include input grammars. In order to manage a skill's recognized input grammar, developers may use a speech tool suite to specify the various grammars they wish recognized.
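By way of a non-limiting illustration, a phrase-spotting grammar might be expressed as sketched below in JavaScript before being compiled by the speech tool suite; the rule syntax, the output tags, and the pcd.speech.registerGrammar call are assumptions for illustration.

// Sketch of a recognized input grammar with output tags (names assumed).
var photoGrammar = {
  name: "photo-commands",
  rules: [
    { phrase: "take (a | another) picture", tag: "RANDOMPICTURE" },
    { phrase: "play [some] music",          tag: "PLAYMUSIC" }
  ]
};
pcd.speech.registerGrammar(photoGrammar);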
[00375] Once a developer has the assets for a skill in order, the developer may write the skill itself using a behavior editor. The behavior editor enables a developer to author the logic governing the handling of the sensory input, as well as the control of the expression output. While most of this step can be done using a straightforward editor, the SDK may enable the addition of straight JavaScript code to enable a developer to do things that might be unique to the particular skill, such as exchanging data with one or more proprietary REST APIs, or the like.
[00376] Once a skill is (partially) written, the developer may exercise various aspects of the skill using a PCD simulator, which may occur in real time or near real-time. The simulator may support the triggering of basic sensory input, and may also operate on a sensory input file created earlier via PCD's developer record mode. Inputs to the simulator may come from physical input to the PCD 100, from one or more sensors external to the PCD 100, directly from the simulator, or from external devices, such as IoT devices, or applications, such as web applications or mobile applications. The simulator will support parts of the Expression System via WebGL graphic output, as well as text to represent the TTS output. The development and simulation cycle can be in real time or near-real time, using a WYSIWYG approach, such that changes in a skill are immediately visible on the simulator and are responsive to dynamic editing in the simulator.
[00377] Ultimately, the developer may need to test the skill on the PCD 100 itself, since more complex behaviors (such as notifications) may not be supported within the simulator. In addition to ad hoc live testing, the developer may again drive the testing via sensory input files created via the PCD's record mode. In embodiments inputs may be streamed in real time or near real time from an external source.
[00378] Also, if the developer wishes to enable others to use and purchase the new skill, the developer may submit the skill, such as to the host of the SDK, for certification. Various certification guidelines may be created, such as to encourage consistency of behavior across different skills, to ensure safety, to ensure reliability, and the like. Once certified, the skill may be placed in the PCD store for access by users, other developers, and the like. In embodiments developers can also post assets (e.g., animations, skills, sounds, etc.) on a store for the PCD 100, a developer's portal, or the like. [00379] Various tools may be deployed in or in connection with the SDK. These may include a local perception space (LPS) visualization tool that allows a developer to see, understand and/or test the social robot's local perception space (e.g. for identification of a person, tracking a person, emotion detection, etc.). Tools may include various tools related to speech in a speech tool suite of utilities to create new grammars, and annotate the text-to-speech output. In embodiments, tools may be used to apply filters or other sounds or audio effects over a spoken utterance. Tools may include a behavior editor to allow developers to author behavior, such as through behavior trees (e.g. the "brain") for a given skill.
[00380] An expression tool suite may include a suite of utilities to author expressive output for the social robot, which may include an animation simulator that simulates animated behavior of the PCD 100. This may comprise HTML or JavaScript with a webkit and an interpreter, such as V8 JS Interpreter™ from Google™ underneath. Behaviors and screen graphics may be augmented using standard web application code.
[00381] A simulated runtime environment may be provided as a tool for exercising various aspects of a skill.
[00382] With reference to FIG. 20, there are illustrated exemplary and non-limiting screen shots of a local perception space (LPS) visualization tool that may allow a developer to see the local perception space of the PCD 100, such as seen through a camera of the PCD 100. This can be used to identify and track people within the view of the PCD 100. In embodiments this may grow in complexity and may comprise a three-dimensional world, with elements like avatars and other visual elements with which the PCD 100 may interact.
[00383] A speech tool suite may include tools related to hearing (e.g., an "ear" tool) and speaking. This may include various capabilities for importing phrases and various types of grammars (such as word spotting, statistical, etc.) from a library, such as yes/no grammars, sequences of digits, natural numbers, controls (continue, stop, pause), dates and times, non-phrase-spotting grammars, variables (e.g., $name), and the like. These may use ASR, speech-to-text capabilities, and the like and may be cloud-based or embedded on the PCD 100 itself. The tool suite may include basic verification and debugging of a grammar, with application logic, in the simulator noted above. A tool suite may include tools for developing NLU (natural language understanding) modes for the PCD 100. Resources may be created using an on-device grammar compilation tool. Resources may include tools for collecting data (e.g., like mechanical turk) and machine learning tools for training new models, such as for phrase spotting, person identification via voice, or other speech or sound recognition or understanding capabilities. Grammars may publish output tags for GUI presentation and logic debugging. A sensor library of the PCD 100 may be used to create sensory resources and to test grammar recognition performance. Testing may be performed for a whole skill, using actual spoken ASR. Phrase-spotting grammars may be created, tested and tuned.
[00384] In the behavior editor, when invoking the recognizer, a developer may modify a restricted set of a recognizer's parameters (e.g. timeout, rejection, etc.) and/or invoke callback on recognition results (such as to perform text processing).
[00385] With reference to FIG. 21, a screenshot is provided of a behavior editor according to an exemplary and non-limiting embodiment. The PCD behavior editor 2100 may enable developers/designers to quickly create new skills on a PCD 100. The output file, defined in this section, drives the execution engine 1704. More details on the behavior editor 2100 are provided below.
[00386] In embodiments, the behavior authoring tool may comprise a behavior tree creator designed to be easy to use, unambiguous, extensible, and substantially WYSIWYG. The behaviors themselves may comprise living documentation. Each behavior may have a description and comment notation. A behavior may be defined without being implemented. This allows designers to "fill in" behaviors that don't yet exist.
[00387] The PCD behavioral system may be, at its core, made up of very low level simple behaviors. These low level behaviors may be combined to make more high-level complex behaviors. A higher-level behavior can either be hand coded, or be made up of other lower level behaviors. This hierarchy is virtually limitless. Although there are gradients of complexity, behavior hierarchies can be divided roughly into four levels: (1) atomic behaviors (the minimal set of behaviors to have a functioning behavior tree, generally including behaviors that are not necessarily dependent on the functions of the PCD 100); (2) PCD 100 based behaviors (behaviors that span the full capability set of the PCD 100, such as embodied in various JavaScript APIs associated with the social robot); (3) compound, high level behaviors (which may be either hand coded, or made up of parameterized behavior hierarchies themselves); and (4) skeleton behaviors (behaviors that do not exist yet, are not fully implemented, or whose implementation is separate). Behavior hierarchies may be learned from the experience of the PCD 100, such as using machine learning methods such as reinforcement learning, among others. Each function call in the social robot API, such as embodied in a JavaScript API, may be represented as a behavior where it makes sense. A skeleton behavior can be inserted into a behavior tree for documentation purposes and implemented later and bound at runtime. This allows a designer who needs a behavior that does not yet exist to insert this "Bound Type", which includes a description and possible outcomes of this behavior (Fail, Succeed, etc.), and have an engineer code the implementation later. If, during playback, the bound type exists then that type is bound to the implementation; otherwise, the PCD 100, or the simulation, may speak the bound behavior name and its return type and continue on in the tree. The tools may also support the definition of perceptual hierarchies to develop sophisticated perceptual processing pipelines. Outputs of these perceptual trees may be connected to behaviors, and the like. In addition, the development platform and SDK support a suite of multi-modal libraries of higher-order perceptual classification modules (Reusable Multi-Modal Input-Output Modules) made available to developers.
[00388] At the most atomic, a behavior tree may be made of these elementary behaviors: BaseBehavior - a leaf node; BaseDecorator - a behavior decorator; Parallel - a compound node; Sequence (and sequence variations) - a compound node; Select - a compound node; and Random (and random variations) - a compound node. Atomic behaviors may be almost the raw function calls to the PCD JavaScript API, but wrapped as a behavior with appropriate timing. They span the entire API and may be very low level. Some examples include: LookAt; LoadCompileClip; and PlayCompiledClip. Compiled clips may have embedded events. A behavior or decorator can listen for an event of a certain type and execute logic at the exact moment of that event. This allows tight synchronization between expression output and higher-level decision making. Atomic behaviors may also include: PlayMp3; Listen; ListenTouch; and Blink (such as with parameters relating to blinkSpeed and interruptPreviousBlink=(true|false)).
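By way of a non-limiting illustration, the JavaScript sketch below composes several of the atomic behaviors named above into a small tree; the module path and the constructor argument shapes are assumptions, with only the behavior names drawn from the description above.

// Sketch only: the "pcd-behaviors" module and argument shapes are assumed.
var behaviors = require("pcd-behaviors");

var greetTree = new behaviors.Sequence([
  new behaviors.LookAt({ target: "nearest-person" }),
  new behaviors.Parallel([
    new behaviors.PlayCompiledClip({ clip: "wave.anim" }),
    new behaviors.PlayMp3({ file: "hello.mp3" })
  ]),
  new behaviors.Listen({ grammar: "photo-commands", timeoutMs: 5000 }),
  new behaviors.Blink({ blinkSpeed: "fast", interruptPreviousBlink: true })
]);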
[00389] Compound/High-level behaviors may be high level behaviors that combine other high level and/or low level behaviors. These behaviors may be parametrized. Examples may include: BeAttentive; TakeRandomPictures; BeHappy; and StreamCameraToScreen. Behaviors can be goal directed, such as to vary actions to achieve a desired outcome or state with the world. For example, in the case of object tracking, a goal may be to track an object and keep it within the visual field. More complex examples would be searching to find a particular person or varying the behavior of the PCD 100, such as to make a person smile. In embodiments, the mood or affective or emotive state of the PCD 100 can modify the behavior or style of behavior of the PCD 100. This may influence prioritization of goals or attention of the PCD. This may also influence what and how the PCD 100 learns from experience.
[00390] Readability of the behavior trees is important, especially when the trees become large. Take a simple case statement that branches the tree based on an utterance. The formal way to declare a case statement is to create a Select behavior that has children from which it will "select" one to execute. Each child is decorated with a FailOnCondition that contains the logic for "selecting" that behavior. While formal, this approach makes it difficult to automatically see why one element might be selected over another without inspecting the logic of each decorator. The description field, though, may be manually edited to provide more context, but there is not necessarily a formal relationship between the selection logic and the description field. With reference to FIG. 22, there is illustrated a formal way of creating branching logic according to an exemplary and non-limiting embodiment. Note the code of the first and second decorators 2200, 2202. FIG. 22 illustrates the formal relationship.
[00391] In the PCD 100, there are common branching patterns. A few of these include: grammar-based branching; touch-based branching; and vision-based branching.
[00392] For the most common branching, the behavior tool GUI may simplify the tree visualization and provide a formal relationship between the "description" and the logic. This may be achieved by adding to the behavior tree editor an "Info" column, which is auto-populated with a description derived by introspecting the underlying logic. The GUI tool may know that the specialized Select behavior called "GrammarSelect" is meant to be presented in a particular mode of the GUI. The underlying tree structure may be exactly the same as in FIG. 22, but it may be presented in a more readable way.
[00393] With reference to FIG. 23, there is illustrated an exemplary and non-limiting embodiment whereby select logic may be added as an argument to the behavior itself. In this case, the added argument may be a string field that corresponds to the grammar tag that is returned, and the value of that argument may be automatically placed in the "Info" field. The value of the added argument in each child behavior to GrammarSelect can be used to generate the correct code that populates the underlying SucceedElseFail decorator.
[00394] The "common pattern" for multimodal interaction is known, and it is an evolution of the common pattern for unimodal interaction (speech), which has been used in the past. This is true only in "sequential multimodality" (e.g. the two modes). However, robot behavior and human-machine interaction (HMI) have slightly different paradigms. While the first is more easily expressed by a behavior tree, the "nesting" structure of dialog lends itself better to nested "case" statements, or even more generally, to a representation involving a recursive directed graph with conditional arcs. So one may match the two with an enhancement to the GrammarSelect to increase readability of the HMI flow allowing for building sophisticated interactions.
[00395] Practically any human-machine interaction may happen in this way. First, a machine is configured to output something (in general something like animation+audio+texture), then the human inputs something (in general speech or touch) or some other process returns an event that is significant for the interaction, and the sequence iterates from there with additional outputs and inputs.
[00396] So, the case statement above (GrammarSelect) would cover that if one extended it to the full event paradigm and one could have a general HMI select, where one can specify the tag (which corresponds to an event) and the type of tag (grammar, vision, touch). So the above would be:
[00397] HMI InputSelect:
[00398] AnyBehavior1 Speech:RANDOMPICTURE, Touch:AREA1
[00399] AnyBehavior2 Speech:PLAYMUSIC, Touch:AREA2
[00400] AnyBehavior3 Vision: TRACKINGFACELOST
[00401] The tags separated by commas are combined with OR. In this example the behavior would respond with AnyBehavior1 to someone saying "take random pictures" or touching AREA1, with AnyBehavior2 to someone saying "Play Music" or touching AREA2, or with AnyBehavior3 if the vision system returns a TRACKINGFACELOST.
[00402] Another way to improve readability of the HMI flow is to explicitly see the text of the prompts in the behavior tree specification view, by introducing a basic behavior called, for example, "Speak". So, referring to the above example, if someone says RANDOMPICTURE, then one enters into the AnyBehavior1 sequence.
[00403] The PCD 100 speaks: "OK, I am going to take a picture of you now. Ready?"
[00404] The user returns a "Yes," resulting in processing of either Behavior Speech:YES or Touch:YESAREA.
[00405] Then the PCD 100 initiates a sequence, such as a TakePictureBehavior.
[00406] If the PCD 100 detects a "no," such as hearing a NoBehavior Speech:NO or sensing a Touch:NOAREA, then the PCD 100 executes a GoHomeBehavior and initiates a speech behavior: robotSpeak "OK. Going back to home screen".
[00407] In this case, the PCD Speak is a basic behavior that randomizes a number of prompts and the corresponding animations (in embodiments, one can see the prompts and the animations if one double clicks the behavior, and the behavior editing box will pop up). It is important to have typing of this behavior, because the UI designer can write the prompt while a developer is designing the application. Then one can automatically mine the behavior tree for all the prompts and create a manifest table for the voice talent, automatically create file names for the prompts, etc. (that alone will save a lot of design and skill-development time). [00408] With interaction behavior expressed in the way shown in the example above, a developer can quickly understand what is going to occur, so that the specification represents at the same time the design and the implementation.
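By way of a non-limiting illustration, the picture-taking interaction described above might be captured by the behavior editor in a structure such as the following JavaScript object; the HMIInputSelect and Speak node shapes are assumptions intended only to show how the prompts and input tags read together.

// Sketch only: the node structure generated by the behavior editor is assumed.
var takePictureFlow = {
  type: "Sequence",
  children: [
    { type: "Speak",
      prompts: ["OK, I am going to take a picture of you now. Ready?"] },
    { type: "HMIInputSelect",
      children: [
        { on: { Speech: "YES", Touch: "YESAREA" },
          behavior: "TakePictureBehavior" },
        { on: { Speech: "NO", Touch: "NOAREA" },
          behavior: { type: "Sequence",
                      children: [
                        "GoHomeBehavior",
                        { type: "Speak",
                          prompts: ["OK. Going back to home screen."] }
                      ] } }
      ] }
  ]
};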
[00409] One thing to notice, regarding using indented trees to represent interactions, is that if the interaction is deep (such as having many nested turns), one quickly runs out of horizontal real estate. So, a designer may make the habit of encapsulating subsequent turns into behaviors that are defined elsewhere. Another problem that affects readability is that the exit condition is not clear in nested statements. In a directed graph representation one can put an arc at any point that goes wherever wanted, and it is perfectly readable. In a nested procedure one may generate a condition that causes the procedure to exit, as well as the other calling procedures.
[00410] The main window of the behavior editor may be a tree structure that is expandable and collapsible. This represents the tree structure of the behaviors. For each behavior in this view one can, in embodiments, drag, drop, delete, copy, cut, paste, swap with another behavior, add or remove one or more decorations, add a sibling above or below and add a child (and apply any of the above to the sibling or child).
[00411] This top level view should be informative enough that an author can get a good idea of what the tree is trying to do. This means that every row may contain the behavior and decorator names, a small icon to represent the behavior type, and a user-filled description field.
[00412] Each behavior may be parameterized with zero or more parameters. For example, a SimplePlayAnimation behavior might take one parameter: the animation name. More complex behaviors will typically take more parameters.
[00413] A compound behavior may be created in the behavior tool as sub behaviors. In embodiments, one may arbitrarily parameterize subtree parameters and bubble them up to the top of the compound behavior graphically.
[00414] Each parameter to a behavior may have a "type" associated with it. The type of the parameter may allow the behavior authoring tool to help the user as much as possible to graphically enter valid values for each argument. The following is an embodiment of a type inheritance structure with descriptions of how the tool will graphically help a user fill in an appropriate value: (1) CompiledClip: Editing a compiled clip may take a developer to the Animation Editor, which may be a timeline based editor; (2) String: A text box appears; (3) File: A file chooser appears; (4) Animation File: A file chooser window appears that lists available animations, which may include user generated animations and PCD-created animations. It may also display a link to the animation authoring tool to create an animation on the spot; (5) Sound File: A file chooser may appear that lists available mp3 files; (6) Grammar File: A file chooser that lists available .raw or .grammar files; (7) Grammar Text: Shows a grammar syntax editor with autocomplete and syntax highlighting; (8) TTS: A TTS editor appears, possibly in preview mode; (9) JavaScript: Shows a JavaScript editor, such as Atom, with syntax highlighting and possible code completion for the social robot APIs; (10) Environment Variables: These are variables that are important to the PCD 100; (11) Number: A number box appears, with min, max, and default; (12) Integer: An integer select box appears, with min, max, and default; (13) Boolean: A true/false combo box or radio select buttons appear; (14) Array<Type>: Displays the ability to add, subtract, or move up or down elements of the type; (15) Vector3d: Displays an (x, y, z) box; and (16) Person: May be nearest, farthest, most well-known, etc.
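By way of a non-limiting illustration, a parameterized compound behavior with typed parameters might be declared as sketched below in JavaScript, letting the authoring tool choose the right input widget for each argument; the registration call and field names are assumptions for illustration, while the type names mirror the type inheritance structure listed above.

// Sketch only: the registration call and field names are assumed.
pcd.behaviors.define({
  name: "GreetAndSnap",
  parameters: [
    { name: "greeting",  type: "TTS",            default: "Say cheese!" },
    { name: "animation", type: "Animation File" },               // file chooser widget
    { name: "retries",   type: "Integer", min: 0, max: 3, default: 1 },
    { name: "target",    type: "Person",  default: "nearest" }
  ],
  tree: "behaviors/greet_and_snap.bt"
});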
[00415] As the PCD 100 runs a behavior tree, a debug web interface may show a graphical representation of the tree, highlighting the current node that it is on. Start, stop, and advance buttons may be available. During pause, the tool may allow introspection on global watch variables and behavior parameter values. Furthermore, limited input interaction may remain available. This may include triggering a phrase or placing a person near the social robot, which may be able to add template knowledge about this person, for example. In embodiments developers may also share behavior models with other developers, such as sharing sensory- motor skills or modules. For example, if the PCD 100 has a mobile base, navigation and mapping models may be shared among developers. The behavior logic classes may be modified by developers, such as to expand and provide variants on functionality.
[00416] The tools of the SDK may include an expression tool suite for managing expressions of the social robot. A core feature of the Expression Tool Suite is the simulation window. With reference to FIG. 24, there is illustrated an embodiment of a simulation window where the main view in both screenshots simulates the animation of the PCD 100. The top main view 2400 also simulates the focal point for the eye graphic. The upper left portion in each screenshot simulates the screen graphic 2402, 2402'. This simulation view may be written in WebGL, such that no special tools are required to simulate the social robot animation (other than having a current version of a browser, such as Chrome™, running). This simulation view need not be a separate tool unto itself; instead, it may be a view that can be embedded in tools that will enable the host of the PCD platform and other developers to create and test PCD animations, such as animations of various skills. It may either be invoked when a developer wants to play back a movement or animation in real time or by "stepping through" the animation sequentially. Thus, provided herein is a simulation tool for simulating behavior of the social robot, where the same code may be used for the simulation and for the actual running of the social robot. [00417] With reference to FIG. 25, there is illustrated an exemplary and non-limiting embodiment of a social robot animation editor of a social robot expression tool suite. With such a tool, a developer may piece together social robot animations, comprised of one or more social robot movements, screen graphics, sounds, text-to-speech actions, and lighting, such as LED body lighting and functionality. FIG. 25 shows a conventional animation editor 2500 of the type that may be adapted for use with the PCD 100. Key features of the animation editor may include a simulation window 2502 for playing back social robot animations, an animation editor 2504 where a developer/designer may place assets (movements, graphics, sound/TTS, LED body lighting, or complete animations) into a timeline, and an assets library 2506, where a developer/designer can pick existing assets for inclusion in the timeline. Assets may come from either the developer's hard drive, or from the PCD store. This may support 3D viewing for altering the view, scale, rotation, or the like of the PCD 100. In embodiments, the editor may allow for use of backgrounds or objects that may expand the virtual environment of the PCD, such as having avatars for simulating people, receiving inputs from a user interface, and the like. In embodiments the animation editor may have a mode that inverts controls and allows users to pose the robot and have an interface for setting keyframes based on that pose. In a similar manner, animating screen-based elements like an eye, overlay or background element may be done by touch manipulation, followed by keyframing of the new orientation/changes. Variants of this approach may also be embodied, such as using the PCD 100 to record custom sound effects for animations (placeholder or final), which would greatly speed up the creative process of designing skills. In embodiments the tool may allow previewing animations via the animation editor directly on the PCD 100 to which the editor is connected.
[00418] In embodiments, the host of the PCD platform may support the ability to import assets and create new assets. "Import" and "create" capabilities may support the various asset types, described herein. For example, creating a new movement may launch the social robot animation movement tool, while creating new TTS phrases launches the social robot's speaking tool.
[00419] Creating new LED lighting schemes may be specified via a dialog box or a lighting tool.
[00420] In embodiments, one or more tools may be embodied as a web application, such as a Chrome™ web application. In embodiments, the given tool may save both the social robot animation itself, such as in a unique file type, such as a .jba or .anim file, as well as a social robot animation project file, such as of a .jbp file type. This approach may be extensible to new tools as the PCD 100 evolves with new capabilities, such as perceptual capabilities, physical capabilities, expressive capabilities, connectivity with new devices (e.g., augmented reality devices), and the like.
[00421] With reference to FIG. 26, there is illustrated an exemplary and non-limiting embodiment of an animation movement tool that may be used, such as by invoking "New... Animation" from the PCD animation editor 2500. At its core, there are radian positions that specify body positions (such as, in a three part robot, by controlling the radial positions of the bottom, middle, and top sections of the robot). In FIG. 26, a set of sliders 2602 may be used to provide movement positions. In embodiments, each set of positions may also be time-stamped, such that a complete movement is defined by an array of time/body-position values. The remaining sliders may be used for controlling the joints in the eye animation. In embodiments, one may separate creating new eye animations from creating new body animations (the two are conflated in this embodiment). Finally, the tool may also support the importing of a texture file to control the look of the eye graphic. The tool may support simulating interaction with a touch screen. In embodiments, the tool may enable various graphics beyond the eye, such as interactive story animations.
[00422] The PCD simulator may not only include the above-referenced simulation window, but also may have an interface/console for injecting sensory input.
[00423] In embodiments, a key based access to a web portal associated with a PCD 100 may allow a developer to install skills on the social robot for development and testing. The web portal on the PCD 100 may provide a collection of web-based development, debugging and visualization tools for runtime debugging of the skills of the PCD 100 while a user continues to interact with the PCD 100.
[00424] The PCD 100 may have an associated remote storage facility, such as a PCD cloud, which may comprise a set of hosted, web-based tools and storage capabilities that support content creation for animation of graphics, body movement, sound and expression. In embodiments, the PCD 100 may have other off-board processing, such as speech recognition machine learning, navigation, and the like. This may include web-based tools for creation of behavior trees for the logic of skills using behavior tree libraries, as well as a library of "plug-in" content to enhance developer skills, such as common emotive animations, graphics and sounds. The interface may be extensible to interface with other APIs, such as home automation APIs and the like.
[00425] The methods and systems disclosed herein may address various security considerations. For example, skills may require authorization tokens to access sensitive platform resources such as video and audio input streams. Skills may be released as digitally signed "packages" through the social robot store and may be verified during installation. Developers may get an individual package, with applicable keys, as part of the SDK.
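By way of a non-limiting illustration, a skill might obtain and present such an authorization token roughly as sketched below in JavaScript; the pcd.security.requestAccess call, the scope names, and the stream-opening API are hypothetical, not the published security interface.

// Sketch only: all security and sensory API names below are assumed.
pcd.security.requestAccess({ scopes: ["video-stream", "audio-stream"] })
  .then(function (token) {
    return pcd.sensory.openVideoStream({ token: token });
  })
  .then(function (stream) {
    // Frames are processed only after authorization succeeds.
    stream.on("frame", function (frame) { /* handle frame */ });
  })
  .catch(function (err) {
    console.error("access denied:", err);
  });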
[00426] In embodiments, the PCD SDK may include components that may be accessed by a simple browser, such as a Chrome™ browser, with support for conventional web development tools, such as HTML5, CSS, JS and WebGL, as well as a canvas for visualization. In embodiments, an open source version of a browser such as Chrome™ may be used to build desktop applications and be used for the simulator, development environment and related plugins, as well as being used for the PCD 100 application runtime. This means code for the PCD 100, whether for development, simulation or runtime usage can typically run in regular browsers with minimal revision, such as to allow skills to be previewed on mobile or PC browsers.
[00427] The SDK described herein may support various asset types, such as input grammars (such as containing pre-tuned word-spotting grammars), graphics resources (such as popular graphics resources for displaying on the screen of the social robot); sounds (such as popular sound resources for playing on speakers of the PCD 100, sculpting prosody of an utterance of the PCD 100, adding filters to the voice, and other sound effects); animations (such as popular bundles of movement, screen graphics, sound, and speech packaged into coordinated animations); and behavior trees (such as popular behavior tree examples that developers can incorporate into skills).
[00428] The PCD SDK may enable managing a wide range of sensory input and control capabilities, such as capabilities relating to the local perceptual space (such as real time 3D person tracking, person identification through voice and/or facial recognition and facial emotion estimation); imaging (such as snapping photos, overlaying images, and compressing image streams); audio input (such as locating audio sources, selecting direction of an audio beam, and compressing an audio stream); speech recognition (such as speaker identification, recognition of phrases and use of phrase-spotting grammars, name recognition, standard speech recognition, and use of custom phrase-spotting grammars); touch (such as detecting the touching of a face on a graphic element and detecting touches to the head of the social robot); and control (such as using a simplified IFTTT, complex behavior trees with JavaScript or built- in behavior libraries).
[00429] The PCD SDK may also have various capabilities relating to the output of expressions and sharing, such as relating to movement (such as playing social-robot-created animations, authoring custom animations, importing custom animations and programmatic and kinematic animation construction); sound (such as playing social robot-created sounds, importing custom sounds, playing custom sounds, and mixing (such as in real time) or blending sounds); speech output (such as playing back pre-recorded voice segments, supporting correct name pronunciation, playing back text using text-to-speech, incorporating custom pre-recorded voice segments and using text-to-speech emotional annotations); lighting (such as controlling LED lights); graphics (such as executing social robot-created graphics or importing custom graphics); sharing a personalization or skill (such as running on devices within a single account, sharing with other developers on other devices, and distributing to a skills store).
[00430] In accordance with various exemplary and non-limiting embodiments, methods and systems are provided for using a PCD 100 to coordinate a live performance of Internet of Things (IOT) devices.
[00431] In some embodiments, a PCD 100 may automatically discover types and locations of IOT devices including speakers, lights, etc. The PCD 100 may then control lights and speakers to enhance a live musical performance. The PCD 100 may also learn from experience what preferences of the users are, such as to personalize settings and behaviors of external devices, such as music devices, IOT devices and the like.
[00432] As inexpensive IOT devices become common, it will be possible to utilize them in entertaining ways. A PCD 100, with spatial mapping, object detection, and audio detection, is ideally equipped to control these devices in coordination with music, video and other entertainment media. A well-orchestrated performance will delight its audience.
[00433] Commercial solutions exist to automatically control sound and lighting to enhance theatrical and live music performances. Similar systems are also used to enhance Karaoke performances. The problem with existing commercial systems is that they are expensive and require expertise to correctly configure sound and lighting devices. Controllable devices are generally designed specifically for theater or auditorium environments. These systems and devices are not found in homes.
[00434] Provided herein is an appropriately programmed PCD 100 that can (1) automatically discover types and locations of IOT devices including lights, speakers, etc. and (2) control these lights, speakers, etc., such as to enhance a live musical performance.
[00435] Consider a family with a home in which IOT lights and speakers have been installed in, say, the kitchen and adjacent family room. This family, being adopters of new technology, may purchase a personal PCD 100 that may be deployed in the kitchen. As part of its setup procedure, the social robot may discover the types and locations of the family's IOT devices and request permission to access and control them. If permission is granted, the PCD 100 may offer to perform a popular song. The social robot then uses its own sound system and expressive physical animation to begin the performance. Then, to the delight of the family, the IOT lights in the kitchen and family room begin to pulse along with the music, accentuating musical events. Then the IOT speakers begin playing, enhancing the stereo/spatial nature of the music.
[00436] The ability to coordinate IOT devices with a music (or other) performance enhances the perceived value of the PCD 100. It could also make the PCD 100 valuable in automatically setting up and enhancing ad hoc live performances outside the home.
[00437] Provided herein are methods and systems for using a PCD 100 to moderate a meeting or conversation between human participants. In such embodiments, a properly designed PCD 100 can be employed as a meeting moderator in order to improve the dynamic and the effectiveness of meetings and conversations.
[00438] Meetings are often not as effective as intended, and individuals who can skillfully moderate meetings are not always available. Successful attempts to address the factors that contribute to suboptimal meetings generally take the form of specialized training sessions or the utilization of expert moderators. These approaches can be effective, but they are expensive.
[00439] Attempts by untrained individuals to moderate meetings often fail because individuals are resistant to instruction and advice offered by peers.
[00440] Often, the goal of a meeting or a conversation is to discuss ideas and opinions as the participants in the course of the meeting contribute them. Often, the expectation is that participants will have the opportunity to contribute freely. Given these goals and expectations, an optimal meeting or conversation is one in which valuable and relevant contributions are made by all participants and all important ideas and opinions are contributed.
[00441] A number of human factors can limit the success of a meeting. For example, individuals are not always committed to the goals and expectations of the meeting. Also, the dynamic between individuals does not always align with the goals and expectations of the meeting. Sometimes the intent of a meeting's participants is explicitly counter to the goals of the meeting. For example, a meeting intended to catalyze a mutual discussion may be hijacked by a participant whose goal is to steer the discussion in a certain direction. In other cases, the dynamic between individuals may be hostile, causing the discussion to focus on the dynamic rather than the intended subject. Unintentional disruption can also minimize the success of a meeting. For example, a talkative, expressive participant can inadvertently monopolize the discussion, preventing others from contributing freely.
[00442] Because of these limiting factors, many (if not most) meetings are sub-optimal. In a business setting, suboptimal, inefficient meetings can be an expensive waste of resources. In a family, suboptimal conversations can be an unfortunate missed opportunity. [00443] The problem, as stated above, is the result of innate human tendencies, and it persists because very little is done to address and correct it. During the typical education of individuals, significant time is spent on instruction for reading, writing, arithmetic, science, art, music, business, etc. But little or no explicit instruction is provided for important skills like conversation, collaboration or persuasion (rhetoric). Because of this, there is an opportunity to significantly improve the effectiveness of collaboration, in general, and meetings, in particular.
[00444] Research reveals that humans are more willing to receive and follow instruction and advice from a social robot than from another human. A social robot can act as an impartial, non-judgmental, expert moderator for meetings. The PCD's biometric recognition capability can allow it to accurately track and measure the degree of participation by each individual in a meeting. This information can be presented as a real time histogram of participation. The histogram can include: talk time per individual; back and forth between individuals; tone (positive/negative) projected by each individual; politeness; idiomatic expressions (positive and negative, encouraging and derogatory, insensitivity); cultural faux pas; emotional state of individuals (affective analysis); overall energy over time; and topics and subtopics discussed.
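For illustration only, the real-time participation histogram described above could be backed by a simple per-speaker accumulator. The JavaScript sketch below uses hypothetical speaker identifiers and event fields; it is not the PCD's actual tracking implementation.

    // Hypothetical accumulator for per-participant meeting statistics.
    class ParticipationHistogram {
      constructor() {
        this.stats = new Map(); // speakerId -> { talkTimeMs, turns, toneSum, toneCount }
      }

      // Record one speaking turn, e.g. as reported by speaker identification.
      addTurn(speakerId, durationMs, toneScore /* -1 (negative) .. +1 (positive) */) {
        const s = this.stats.get(speakerId) ||
          { talkTimeMs: 0, turns: 0, toneSum: 0, toneCount: 0 };
        s.talkTimeMs += durationMs;
        s.turns += 1;
        s.toneSum += toneScore;
        s.toneCount += 1;
        this.stats.set(speakerId, s);
      }

      // Snapshot suitable for rendering as a histogram on a display.
      snapshot() {
        return [...this.stats.entries()].map(([speakerId, s]) => ({
          speakerId,
          talkTimeMs: s.talkTimeMs,
          turns: s.turns,
          averageTone: s.toneCount ? s.toneSum / s.toneCount : 0
        }));
      }
    }

    const histogram = new ParticipationHistogram();
    histogram.addTurn('alice', 42000, 0.4);
    histogram.addTurn('bob', 5000, -0.2);
    console.log(histogram.snapshot());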
[00445] Throughout the course of a meeting, a PCD 100 can transcribe the verbal content and correlate it with social measurements to provide an objective tool for both capturing the discussion and evaluating the effectiveness of the meeting.
[00446] The PCD 100 can be configured with relevant thresholds so that it can interject during the meeting in order to keep the meeting on track. For example, the robot can interject when: someone is talking too much; the tone is too negative; inappropriate idiomatic expressions are used; insensitivity is detected; the overall energy is too low; and/or essential topics are not addressed.
[00447] In its capacity as both an impartial meeting moderator and a social mirror, the PCD 100 can help participants accomplish two important goals: conducting meetings more effectively and learning to collaborate and converse more effectively.
[00448] A meeting, for example, is an environment in which such a technology may be deployed. Meeting participants may include experts from a variety of disciplines with a variety of communication styles. In the case where the meeting is dominated by a talkative participant, the PCD moderator can (in a non-judgmental way) present a real-time histogram - displayed on an appropriate display - that shows the relative talk time of all participants. Additionally, if inappropriate expressions are used, the social robot can (without judgment) attribute these expressions to the contributing participants, such as via a histogram. The energy and tone of the meeting can also be measured and tracked in real time and compared to previous, effective meetings. As a learning opportunity, both effective and ineffective meetings can be compared using the statistics gathered by the PCD 100.
[00449] Thus, a social robot such as a PCD 100 may act as a moderator of meetings, recording and displaying relevant information, and improving the effectiveness and dynamics of meetings, which can translate into increased productivity and a better use of resources.
[00450] Also provided herein are methods and systems for organizing a network of robot agents to distribute information among authenticated human identities and networked mobile devices.
[00451] As the number and variety of communication channels increases, so does the "noise" with which message senders and recipients must contend. Additionally, new channels often specialize in a particular mode of message delivery. The result is that a message sender must decide which channel to use to maximize the likelihood and effectiveness of message delivery. Likewise the message recipient must decide which channel(s) to "watch" in order to receive messages in a timely manner. These decisions are increasingly difficult to make.
[00452] Today, messages from multiple email accounts may be automatically consolidated by mail-reading programs, making it possible to simultaneously monitor multiple email channels. Likewise, mobile devices may present text messages from multiple channels in a consolidated manner. However, message consolidation does not solve the problem of "noise." It may make the problem worse by bombarding the recipient with messages that are all presented in the same mode.
[00453] Social robots can play a unique role in message communication, because of their ability to command attention and because of the importance that humans assign to human-like communication. When a social robot is used as the channel for delivering a message to a recipient, the delivery mode can be chosen automatically by the social robot, so that the message receives an optimal degree of attention by the recipient.
[00454] This may be accomplished using several characteristics unique to social robots: (1) the physical presence of the social robot allows it to attract attention with expressive cues to which humans are innately attuned, i.e. motion, gaze direction, "body language"; (2) a social robot with biometric recognition capability can detect when the intended recipient of a message is physically present and can prompt that recipient with the most effective physical cues; and (3) the learning algorithms employed by a social robot can use the message content, situational context, and behavior history of the recipient to make an optimal decision about how to effectively deliver a message. [00455] Networked Social Robots such as a PCD 100, as well as other devices, such as mobile devices and other network-connected devices, may be used in the methods and systems disclosed herein. The message-delivery advantages afforded by an individual social robot are amplified when multiple, networked social robots are employed. In a household setting, a number of PCDs - distributed among rooms/zones of a house - can coordinate their message-delivery efforts. The physical presence of multiple PCDs throughout the household increases the window during which messages can be delivered by the robots. The network of PCDs can use their shared biometric recognition capabilities to track the whereabouts of intended recipients throughout the household. The learning algorithms employed by the network of PCDs can generate predictive models about recipient movement and behavior to determine which PCD agent can most effectively deliver the message.
[00456] This same dynamic can be applied in any physical location and can be applied to businesses, museums, libraries, etc.
[00457] The physical forms of robots in a network of PCDs may vary. The network may consist of PCDs that are stationary, mobile, ambulatory, able to roll, able to fly, embedded in the dashboard of a vehicle, embedded in an appliance like a refrigerator, etc.
[00458] In addition, the PCD's "brain" (its software, logic, learning algorithms, memory, etc.) can be replicated across a variety of devices, some of which have physically expressive bodies, and some of which do not - as in the case where the PCD 100 software is embodied in a mobile phone or tablet (replicated to a mobile device).
[00459] When a PCD's software is replicated to a mobile device, that device can act as a fully cooperative, fully aware member of a social robot network, as well as with human beings in a social and/or technical network. The degree to which a physically constrained PCD instance can contribute to the task of delivering messages depends on the functionality that it does possess, i.e. PCD software embodied in a typical smartphone will often be able to provide biometric recognition, camera surveillance, speech recognition, and even simulated physical expression by means of on-screen rendering.
[00460] A smartphone constrained PCD instance may generally be able to contribute fully formed messages that can then be delivered by other unconstrained PCDs within the network.
[00461] In a network of PCD instances, each instance can operate as a fully independent contributor. However, any given instance can also act as a remote interface (remote control) to another PCD instance on the network. This remote interface mode can be active intermittently, or an instance can be permanently configured to act as the remote interface to another instance - as in the case where PCD software is embodied in a smartphone or smartwatch for the specific purpose of providing remote access to an unconstrained instance.
[00462] In embodiments, in a family home setting, a message may be created by a parent using an unconstrained (full-featured) robot unit in the kitchen. The parent may create the message by speaking with the PCD 100.
[00463] The message may be captured as an audio/video recording and as a text transcript, such as from a speech-to-text technology, and delivered via text-to-speech (TTS). Delivery is scheduled some time in the future, such as after school today. The intended recipient, Teenager, may not be currently at home, but may arrive at the intended delivery time. In this example, the Teenager does come home after school, but does not enter the kitchen. A tablet-embodied robot unit - embedded in the wall by the garage entrance - may recognize the teenager as she arrives. Because the tablet-embodied unit is networked with the kitchen robot unit, the upstairs robot unit, and the teenager's iPod-embodied unit, all four units cooperate to deliver the timely message. For this kind of message, the preferred delivery mode is via an unconstrained robot unit, so the tablet unit only mentions that a message is waiting. "Hi, [teenager], you have a message waiting." The teenager might proceed to her room, bypassing the kitchen and upstairs robot units. When the delivery time arrives, the network of robot units can determine that because the teenager is not in proximity to an unconstrained robot unit, the next best way to deliver the message is via the teenager's iPod-embodied unit. As a result, the iPod unit sounds an alert tone and delivers the message: "Hey, [teenager]. There is a brownie waiting for you in the kitchen." When the teenager finally does enter the kitchen, the kitchen robot unit is already aware that the message was delivered and only offers a courtesy reminder: "Hi, [teenager]. If you're ready for that brownie, it's in the toaster oven." The PCD 100 may also summarize the content of the message, and who it is from, such as "Carol, Jim left a message for you. Something about picking up the kids from soccer today." This may help Carol decide when to listen to the message (immediately, or somewhat later).
[00464] Thus, a network of social robots can use biometric recognition, tracking, physical presence (such as based on a link between the PCD 100 and an associated mobile device), nonverbal and/or social cues, and active prompting to deliver messages that would otherwise be lost in the noise of multiple, crowded message channels.
[00465] In other embodiments, listening to TV or playing video games loudly can be highly annoying to others in the vicinity with different tastes in what makes audio pleasing. Additionally, many families have members who stay up later than others. [00466] A proposed solution is to support a way for listeners to use headphones receiving audio wirelessly from a social robot so only the listener can hear the audio, and listeners are free to listen as loudly as they desire with no compromise. Variants may include Bluetooth headphones, a headphones bundle, a mobile receiver with wired headphones (such as using local Wi-Fi or Bluetooth), and the like.
[00467] In accordance with exemplary and non-limiting embodiments, a PCD 100 may have Reminder capabilities similar to those in personal assistants on popular smartphones. Example: "At 3pm on December 5th, remind me to buy an anniversary gift" "OK, I'll remind you". Reminders can be recurring to support things like medication reminders. Users may have the option to create the reminder as an audio or video recording, in which case the PCD 100 may need to prompt at the beginning of recording. The PCD 100 may summarize after the message has been created. For example, "OK, I'm going to remind John tomorrow when I see him [play audio]." A reminder is just a special form of PCD Jot where a time is specified.
[00468] The PCD 100 may be able to remind known people (one or more for the same reminder) in the family about things. For example, "When you see Suzie, remind her to do her homework" or "At 6pm, remind Dad and Mom to pick me up from soccer practice." If a reminder is given, the originator of the reminder should be notified on the social robot PCD link if he or she has a social robotLink device. A reminder is just a special form of the PCD Jot where a time is specified. In embodiments, a link may exist between a PCD 100 and a mobile device.
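Since a reminder is described above as a Jot with a time attached, one illustrative and purely hypothetical representation is sketched below in JavaScript; the field names are assumptions, not the PCD's actual schema.

    // A hypothetical Jot record; a reminder is simply a Jot with deliverAt set.
    function createReminder({ from, to, text, deliverAt, recurrence = null, recording = null }) {
      return {
        type: 'jot',
        from,                 // originator, e.g. 'Mom'
        to,                   // one or more intended recipients
        text,                 // text transcript (e.g. from speech-to-text)
        recording,            // optional audio/video recording reference
        deliverAt,            // Date at which the reminder becomes due
        recurrence,           // e.g. 'daily' for medication reminders
        delivered: false
      };
    }

    const reminder = createReminder({
      from: 'Mom',
      to: ['Suzie'],
      text: 'Do your homework',
      deliverAt: new Date('2017-12-05T15:00:00')
    });

    // When the target person is seen and the time has arrived, the reminder
    // becomes due; delivery and originator notification are not shown here.
    function isDue(r, now = new Date()) {
      return !r.delivered && now >= r.deliverAt;
    }

    console.log(isDue(reminder, new Date('2017-12-05T15:05:00'))); // true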
[00469] If the PCD 100 isn't able to deliver a reminder because the target person isn't there, the reminder may appear on the target's social robotLink device(s). If there is no social robotLink device assigned to the target, the PCD 100 may display the message as soon as it sees the target person.
[00470] In accordance with exemplary and non-limiting embodiments, the PCD 100 may be able to send short text messages or audio/visual recordings to other PCD's in its directory, referred to herein as "Jots." The PCD Jot messages may be editable, and the PCD Jot recordings may be able to play back and re-record before sending. The PCD 100 may confirm for senders that the PCD Jot was successfully sent. The PCD 100 may maintain a "sent" Jots folder for each member of the household, which can be browsed and deleted message by message. Sent Jots may be viewable and/or editable on PCD Link or the PCD 100.
[00471] The PCD may maintain a list of PCD animations, referred to herein as "robotticons," akin to emojis used in screen-based devices, such as to give life to or enhance the liveliness of messages. Examples may include a cute wink for "hello" or "oO" for "uh-oh". The social robotticons can be elaborate, and certain specialized libraries may be available for purchase on the PCD Skills Store. Some PCD robotticons may be standalone animation expressions. Others may accommodate integration of a user video image/message. The PCD robotticons may include any of the PCD's expressive capabilities (LED, bipity boops, or other sounds or sound effects, animation, etc.)
[00472] If a user elects to send a photo, such as captured by a "snap" mode of the PCD, the PCD Jot capabilities may be available to append to the photo.
[00473] For example, a family member may always ask the PCD 100 "play me my reminders [from [person]]" and the PCD 100 may respond by beginning playing from the earliest reminders for that person. The PCD's screen may signify that there are reminders waiting. If the PCD sees the intended recipient of a PCD Jot, the PCD 100 may offer to play the Jot if the reminder hadn't been viewed within the last six hours, and the time of the reminder has now arrived. After viewing a message, the recipient may have an option to reply or forward, and then save or delete the message, or "snooze" and have the message replayed after a user defined time interval. Default action may be to save messages. The PCD may maintain an inbox of the PCD Jots for each member of the household that may be scrolled.
[00474] In the event there are multiple family members, an incoming PCD Jot may carry with it an identifier of the intended recipient. The PCD 100 may only show messages to the intended recipient or other authorized users. For example, each member of the family may have their own color, and a flashing "message" indicator in that color lets that family member know the message is for them. The paradigm should accommodate instances where there are different messages awaiting different members of the family. Whether a family member is authorized to view another family member's message may be configurable via Administrator.
[00475] The PCD 100 may be able to create to-do lists and shopping lists, which may be viewable and editable on the PCD Link. For example, users may be able to say "PCD, I need to sign Jenny up for summer camp" and the PCD 100 may respond "I've added 'sign Jenny up for summer camp' to your to-do list." Or "PCD, add butter to my shopping list." Lists may be able to be created for each family member or for the family at large. Each member of the family may have a list, and there may be a family list.
[00476] The PCD Jot may time out after a period of non-use.
[00477] The PCD may have a persistent "Be" state that engages in socially and character-based (emotive, persona model-driven behaviors) interactions, decisions, leanings with users. This state may modulate the PCD skills, personalize the PCD behavior and performance of these skills to specific users based on experience and other inputs. [00478] The PCD 100 may have a single, distinct "powered off" pose, as well as some different animation sequences that lead it to that pose when it is turned off. The PCD 100 may have a single, distinct "Asleep" pose when it is plugged in or running on battery power as well as a number of different animation sequences that lead it to that pose after it gets a "sleep" command or if it decides to take a nap while disengaged. The PCD 100 may have several different animations corresponding to "wake up" verbal or tactile commands or other audiovisual events or turning the power on/connecting a power source when it has been asleep or off for <=48 hours. In embodiments, there can be distinct sleep modes, such as one where the PCD 100 is waiting but still has active microphones and cameras to wake up when appropriate. In another sleep mode (which may be indicated by some cue, such as an LED indicator), the PCD 100 may have microphones and camera off, so that the PCD 100 does not see or hear when asleep in this mode. In the latter mode, a person may need to touch the robot or use a different modality than speech or visual input to wake up the PCD 100.
[00479] The PCD 100 may have several different animations corresponding to verbal or tactile "wake up" commands or other audiovisual events turning the power on/connecting a power source when it has been asleep or off for >=48 hours.
[00480] The PCD 100 may have several wake up animations corresponding to verbal or tactile "wake up" commands or turning the power on after more than 3 hours asleep or off between 11pm and 11am local time, for example.
[00481] The PCD 100 may have several different ways of "dreaming" while it is asleep. These Dreaming States may occur during approximately 30% of sleep sessions that last longer than 15 minutes. The PCD's dreams can be interrupted so that it goes into a silent sleep state with commands, or by touch screen, in the event people in the room find its dreams distracting.
[00482] The PCD 100 may notify users verbally and on-screen when its power level is below 20%, and at each decrement of approximately 5% thereafter, for example.
[00483] The PCD 100 may notify users on-screen when its power source is switched between outlet and battery. It should also be able to respond to questions such as "Are you plugged in?" or "Are you using your battery?" The PCD 100 may automatically power on or off when the button on the back of its head is pushed and held. A short button push puts the social robot to sleep.
[00484] The PCD 100 may be set to wake up from sleep via voice or touch, or just touch. If the PCD 100 is on but not engaged in active interaction (i.e., in a base state referred to herein as the "Be" or "being" state), the PCD 100 may exhibit passive awareness animations when someone enters its line of sight or makes a noise. These animations may lead to idling active awareness if the PCD 100 believes the person wants to engage.
[00485] If the PCD 100 is passively aware of someone and believes that person wants to actively engage either because of a verbal command or because that person is deliberately walking toward the PCD 100, it may exhibit "at your service" type active awareness animations.
[00486] The PCD 100 may comment that it can't see because a foreign object is covering its eyes if it is asked to do anything that requires sight. If the PCD 100 is tapped on the head independent of any kind of prompt, it may revert to Idling Active Awareness. In other embodiments, if the PCD 100 is stroked or petted, or if it is praised verbally, it may exhibit a "delight" animation, and revert to Idling Active Awareness.
[00487] If a recognized member of the PCD's family is in line of sight or identified, such as via a voice ID, the PCD 100 may generally greet that family member in a personal way, though not necessarily verbally (which may depend on the recency of a last sighting of that family member).
[00488] If a stranger is in line of sight or detected via voice, the PCD may go into passive awareness mode. If it detects interest from the stranger, it should introduce itself without being repetitive. The PCD 100 may not proactively ask who the other person is since the "known family members" are managed by the PCD's family Administrator.
[00489] If a recognized member of the PCD's family is with an unrecognized stranger, the PCD 100 may first greet the family member personally. If that family member introduces the PCD 100 to the stranger, the PCD 100 may not proactively ask who the other person is since the "known family members" are managed by the social robot's family Administrator.
[00490] If the social robot's family Administrator introduces the social robot to meet a new person and the Administrator proactively says he should remember the new person, the social robot should take up one of the 16 ID slots. If there are no available ID slots, the PCD 100 may ask the Administrator if he or she would like to replace an existing recognized person.
[00491] When asked to learn a new person, the PCD 100 collects the necessary visual and audio data, and may also suggest that the Administrator have the new person go through the PCD Link app to more optimally capture visual and audio samples, and learn name pronunciation.
[00492] In some embodiments, the PCD 100 may have several forms of greetings based on the time of day. For example, "Good Morning" or "Good evening" or "You're up late." If the PCD 100 knows the person it is greeting, the greeting may frequently, but not always, be personalized with that person's name. [00493] If someone says goodbye to the PCD 100, it may have several ways of bidding farewell. If the PCD 100 knows the person saying goodbye, it may personalize the farewell with that person's name.
[00494] The PCD 100 may have some idle chatter capabilities constructed in such a way that they don't encourage unconstrained dialog. These may include utterances that aim for a user response, or simple quips designed to amuse the user but not beckoning a response. These utterances may refer to known "Family Facts" as defined in the Family Facts tab, such as wishing someone in the family "happy birthday". In embodiments, visual hints may be displayed on a screen as to what utterances the PCD 100 is expecting to hear, such as to prompt the user of the PCD 100. Utterances may also be geocentric based on a particular PCD's zip code. Utterances may also be topical as pushed from the PCD Cloud by the design team such as "I can't believe Birdman swept the Academy Awards! ". Quips may be humorous, clever, and consistent with the PCD's persona. Chatbot content should also draw from the PCD's memory of what people like and dislike based on what they've told it or what it gleans from facial expression reactions to things like pictures, songs, jokes, etc.
[00495] The PCD 100 may periodically ask family members questions designed to entertain.
[00496] The PCD 100 may have several elegant ways of expressing incomprehension that encourage users to be forgiving if it is unable to understand a user despite requests to repeat the utterance.
[00497] The PCD 100 may have several likeable idiosyncratic behaviors it expresses from time to time, such as specific preferences, fears, and moods.
[00498] The PCD 100 may have a defined multimodal disambiguation paradigm, which may be designed to elicit patience and forgiveness from users.
[00499] The PCD 100 may have several elegant ways of expressing it understands an utterance but cannot comply or respond satisfactorily.
[00500] The PCD 100 may sometimes amuse itself quietly in ways that exhibit it is happy, occupied and not in need of any assistance.
[00501] The PCD 100 may have several ways to exhibit it is thinking during any latency incident, or during a core server update.
[00502] The PCD 100 may have several ways of alerting users that its WiFi connectivity is down, and also that WiFi has reconnected. Users can always reactivate WiFi from the settings and by using the QR code from the PCD Link. [00503] The PCD 100 may have a basic multimodal navigation paradigm that allows users to browse through and enter skills and basic settings, as well as to exit active skills. Advanced settings may need to be entered via PCD Link.
[00504] The PCD 100 may have the ability to have its Administrator "lock" it out so that it cannot be engaged, beyond an apologetic notification that it is locked, without a password.
[00505] The PCD 100 may be able to display available WiFi networks on command. The PCD 100 may display available WiFi networks if the WiFi connection is lost. The PCD 100 may provide a way to enter the WiFi password on his screen.
[00506] The PCD 100 may have a visual association with each known member of the family. For example, Jim is always Blue, Jane is always Pink, Mom is always Green, and Dad is always Purple. When the PCD 100 interacts with that member of the family, that visual scheme should be dominant. This visual identifier can be used throughout the PCD's skills to ensure family members know the PCD 100 recognizes them.
[00507] The PCD 100 may recognize smiles and respond in a similar manner.
[00508] The PCD 100 may play pictures from its PCD Snap photo album in slide show mode while it's in Be, and if the user is in the picture, the PCD 100 may say "you look particularly good in this one". Sometimes the PCD 100 may look at its "own" photos, like of the first Macintosh, or R2D2, or pinball machines, but then pictures of its family are included from time to time also.
[00509] The PCD 100 may often exhibit happiness without requiring interaction. For example, it plays pong with itself, draws pictures on its screen like the Mona Lisa with a PCD 100 as the face. Over time, these skills may evolve (e.g., starts with lunar lander ASCII game or stick figures then progresses to more complex games). In some embodiments, the PCD 100 may have a pet, such as a puppy, and its eye may become a ball the dog can fetch. The PCD 100 may have passive back and forth with its dog. It may be browsing through its skills, such as reading cookbooks. It could be dancing to some kind of limited library of music, practicing its moves. Sometimes it is napping. In some embodiments, the PCD 100 may write poems, such as Haikus, based on family facts with gong. In other embodiments, the PCD 100 may be exercising and giving itself encouragement. In other embodiments, the PCD 100 may play instruments, watch funny YouTube clips and chuckle in response, execute a color-by-numbers kids game, move to cause a ball to move through a labyrinth and play Sudoku. The PCD 100 may have its own photo album and collect stamps.
[00510] In some embodiments, the PCD 100 may engage in and display a ping-pong based game wherein side to side movements control a user's paddle in play against the PCD 100. [00511] If the PCD 100 is running on battery power, there may be an icon on its screen showing remaining battery life.
[00512] If people praise the PCD 100 in a social context rather than a task context, it may exhibit "delight/affection" animation.
[00513] When in a group, the PCD 100 may engage with one person at a time. It may only turn to engage someone else if they indicate a desire to speak with the PCD 100 AND the person the PCD 100 is currently engaged with remains silent or otherwise disengages. In embodiments the PCD may use various non-verbal and paralinguistic social cues to manage multi-person interactions simultaneously.
[00514] The PCD 100 may have a basic timer functionality. For example "PCD, let me know when 15 minutes have passed."
[00515] The PCD 100 may be able to create a tone on a phone that is connected to it via PCD Link to assist users in locating a lost phone that is within WiFi range. The ability to control whether someone can create this tone on a PCD Linked phone that is not their own device may be configurable via Administrator settings.
[00516] The PCD 100 may have a stopwatch functionality similar to that used in current smartphones.
[00517] The PCD 100 may have a built-in clock and be able to tell the time in any time zone if asked. Sometimes, the PCD 100 may display the time; other times it may not, based, at least in part, on its level of engagement and what it is doing. The PCD 100 may have an alarm clock functionality. For example, "The social robot, let me know when it's 3:30pm". There may be a snooze function included. The PCD 100 may have several alarm sounds available and each family member may set their preferred alarm sound. If no preferred alarm sound is set, the PCD 100 may select one.
[00518] The PCD 100 may have established multi-party interaction policy, which may vary by skill.
[00519] The PCD 100 may have a quick "demo reel" which it can show if asked to "show off" its capabilities.
[00520] The PCD 100 may have specified but simple behavior options when it encounters and recognizes another PCD 100 by voice ID if it is introduced to another PCD 100 by a family member. In embodiments, a PCD 100 may have specific, special behaviors designed for interacting with another PCD 100.
[00521] In accordance with exemplary and non-limiting embodiments, a given skill or behavior (such as an animation, speech, or the like) may manifest differently based on other attributes associated with a PCD 100. For example, the PCD 100 may be programmed or may adapt, such as through interactions over time with a user or group, to have a certain personality, to undertake a certain persona, to operate in a particular mode, to have a certain mood, to express a level of energy or fatigue, to play a certain role, or the like. The PCD SDK may allow a developer to indicate how a particular skill, or component thereof, should vary based on any of the foregoing, or any combination of the foregoing. For example, a PCD 100 may be imbued with an "outgoing" personality, in which case it may execute longer, louder versions of speech behaviors, as compared to an "introverted" PCD 100 that executes shorter, quieter versions. Similarly, an "active" PCD 100 may undertake large movements, while a "quiet" one might undertake small movements when executing the same skill or behavior. Similarly, a "tired" PCD 100 might display sluggish movements, slow speech, and the like, such as to cue a child subtly that it is time for bed. Thus, provided herein is a social robot platform, including an SDK, that allows development of skills and behaviors, wherein the skills and behaviors may be expressed in accordance with a mode of the PCD 100 that is independent of the skill. In embodiments, the PCD 100 may adapt to interact differently with distinct people, such as speaking to children differently from adults, while still maintaining a distinct, consistent persona.
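One way a developer might express such persona-dependent variation, sketched here purely for illustration in JavaScript, is to parameterize how a single behavior is expressed by the robot's current mode; the mode names and scaling factors below are assumptions, not part of the actual SDK.

    // Hypothetical persona profiles that scale how a single greeting skill
    // is expressed, independent of the skill's own logic.
    const personaProfiles = {
      outgoing:    { speechRate: 1.0, volume: 1.0, movementScale: 1.0, phrase: 'Hey there! Great to see you,' },
      introverted: { speechRate: 0.9, volume: 0.6, movementScale: 0.5, phrase: 'Hi,' },
      tired:       { speechRate: 0.7, volume: 0.5, movementScale: 0.3, phrase: 'Hello... it is getting late,' }
    };

    // The skill stays the same; only its expression changes with the mode.
    function greet(userName, mode) {
      const p = personaProfiles[mode] || personaProfiles.outgoing;
      return {
        say: `${p.phrase} ${userName}`,
        ttsOptions: { rate: p.speechRate, volume: p.volume },
        animationScale: p.movementScale
      };
    }

    console.log(greet('Jim', 'tired'));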
[00522] In accordance with various embodiments, a wide range of skills may be provided. Important skills include meeting skills (including for first and subsequent meetings, such as robot-augmented video calls), monitoring skills (such as monitoring people and/or pets in the home), photographer skills, storytelling skills (and multi-media mashups, such as allowing a user to choose at branch point to influence the adventure plot, multi-media performance-based stories, and the like), game-playing skills, a "magic mirror" skill that allows a user to use the social robot as an intelligent mirror, a weather skill, a sports skill, or sports buddy skill that interacts to enhance a sports program or sports information or activity like fantasy sports, a music skill, a skill for working with recipes, serving as an intelligent interactive teleprompter with background/animation effects, and a coaching skill (such as for medication compliance, personal development, training, or the like).
[00523] To facilitate automated speech recognition (or other sound recognition), the methods and systems disclosed herein may undertake beam forming. A challenge is that one may desire to allow a user to call the attention of the social robot, such as by using a "hot phrase," such as "Hey, Buddy." If the PCD 100 is present, it may turn (or direct attention) to the voice that uttered the hot phrase. One way to do that is to use beam forming, where there are beams (spatial filters or channels) that point to different locations. Theoretically, each spatial filter or channel, corresponding to a beam, takes sound from that channel and seeks to disregard the other channels. Typically people do that in, for example, polyphone devices by picking up the beams with the highest volume and assuming that the highest volume beam is the one for the person talking. The methods and systems disclosed herein may undertake improved beam forming and utilization, such as in order to pick up the beam of the person who says the hot phrase. In embodiments, the social robot platform disclosed herein may have a distinct instance of the speech recognizer for each beam, or for a sub-set of beams. Thus, each speech recognizer is listening to a cone of space. If the device is among, for example, a group of four people, and one person says "Hey Buddy," the device will then see that someone is calling attention from the direction of that speaker. To implement that, the systems and methods may have a speech recognizer per channel or subset of channels.
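A schematic and hypothetical JavaScript sketch of the per-beam arrangement described above follows; the recognizer interface, beam count, and orientation math are assumptions made solely for illustration.

    // Hypothetical setup: one phrase-spotting recognizer per spatial beam.
    const BEAM_COUNT = 6;            // e.g. six fixed beams around the robot
    const HOT_PHRASE = 'hey buddy';

    // Stand-in for a real recognizer; in practice each instance would consume
    // the audio stream of its own beam (spatial filter).
    function createRecognizer(beamIndex, onHotPhrase) {
      return {
        beamIndex,
        // Called with text hypothesized from this beam's audio channel.
        onResult(text) {
          if (text.toLowerCase().includes(HOT_PHRASE)) {
            onHotPhrase(beamIndex);
          }
        }
      };
    }

    // When any beam's recognizer spots the hot phrase, orient toward that beam.
    function orientToward(beamIndex) {
      const angle = (360 / BEAM_COUNT) * beamIndex; // degrees, purely illustrative
      console.log(`Hot phrase heard on beam ${beamIndex}; turning to ${angle} degrees`);
    }

    const recognizers = Array.from({ length: BEAM_COUNT }, (_, i) =>
      createRecognizer(i, orientToward));

    // Simulated input: only beam 2 hears the wake phrase clearly.
    recognizers[2].onResult('hey buddy, take a picture');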
[00524] Ideally, one may wish to maintain the orientation of the beam based on the PCD's motion/orientation. The system that is running the beam forming may receive information from the motor controllers or may receive location or orientation from an external system, such as a GPS system, a vision system or visual inputs, or a location system in an environment such as a home, such as based on locations of IOT devices. The motor controllers, for example, may know the angle at which the PCD 100 rotates; the PCD 100 may then need to find its coordinates. This may be accomplished by speaking the hot phrase again to re-orient it, or by taking advantage of other location information. Person tracking may be used once a speaker is located, so the PCD 100 may move and turn appropriately to maintain a beam in the direction of the speaker as the speaker moves, and other perceptual modalities may augment this, such as tracking by touch, by heat signature, or the like. In embodiments, integration of the sound localization and the visual cues may be used to figure out which person is trying to speak to the PCD 100, such as by visually determining facial movement. In embodiments, one may also deploy an omnidirectional "low resolution" vision system to detect motion in the room, then direct a higher quality camera to the speaker.
[00525] In other exemplary embodiments, the methods and systems disclosed herein may use tiled grammars as part of phrase spotting technology. To do effective phrase spotting, one may preferably have short phrases, but the cost of building phrase spotting is higher depending on how many different phrases one must recognize. To distinguish among, for example, ten items of content, the more distinct phrases one must recognize, the costlier it becomes (geometrically). In embodiments, the methods and systems disclosed herein may break the phrases into different recognizers that run simultaneously in different threads, so each one is small and costs less. This enables a series of capabilities, since the concept of phrase spotting lets one find content-bearing chunks of speech. For example, take the phrase: "Hey Buddy, I want to take a picture and send it to my sister." Two chunks likely matter in most situations: "take a picture" and "send it to my sister." Depending on the result of one phrase spotting thread, one can trigger another, modified, phrase spotting recognizer. One can build a graph of recognizers (not just a graph of grammars, but actual recognizers), each of which recognizes particular types of phrases. Based on the graph, a recognizer can be triggered by an appropriate parent recognizer that governs its applicability and use. Thus, provided herein is an automated speech recognition system with a plurality of speech recognizers working in parallel, the speech recognizers optionally arranged according to a graph to permit phrase spotting across a wide range of phrases.
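The graph of recognizers described above might be organized, purely as an illustrative JavaScript sketch, as nodes whose successful spots enable their child recognizers; the phrase sets and graph structure below are assumptions, not the system's actual grammars.

    // Hypothetical node in a graph of phrase-spotting recognizers. Each node
    // holds a small set of phrases, so each individual recognizer stays cheap.
    function makeNode(name, phrases, children = []) {
      return { name, phrases, children };
    }

    // A parent recognizer governs when its child recognizers become applicable.
    const recognizerGraph = makeNode('root', ['take a picture', 'set a reminder'], [
      makeNode('photo-followup', ['send it to my sister', 'post it', 'delete it']),
      makeNode('reminder-followup', ['tomorrow morning', 'at six pm'])
    ]);

    // Walk the graph: when a parent spots a phrase, its children are activated
    // and scanned for further content-bearing chunks of the same utterance.
    function spot(node, utterance, matches = []) {
      const text = utterance.toLowerCase();
      const hit = node.phrases.find(p => text.includes(p));
      if (hit) {
        matches.push({ recognizer: node.name, phrase: hit });
        node.children.forEach(child => spot(child, utterance, matches));
      }
      return matches;
    }

    console.log(spot(recognizerGraph,
      'Hey Buddy, I want to take a picture and send it to my sister'));
    // -> [{ recognizer: 'root', phrase: 'take a picture' },
    //     { recognizer: 'photo-followup', phrase: 'send it to my sister' }]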
[00526] Details of the methods, tools, user interfaces, and techniques (generally SDK elements) for developing robot skills and assets are depicted in the accompanying figures and described below. These elements generally may be accessible to developers and the like through one or more computer servers adapted with the software, data structures, databases, user interfaces, and the like described herein. While each element is described in terms of its requirements, inputs, uses, operation, interaction with other elements, and outputs, together these elements provide a comprehensive environment for developing, testing, and instantiating social interaction and other skills and assets for skills in a persistently present social robot. The elements here are designed to address many complex technical problems related to physical aspects of the robot, such as positioning, movement, control, input sensing, display, light, sound, and aroma generation. These elements are also designed to provide access to complex algorithms operating on one or more processing facilities of the robot to instill capabilities such as developing rapport with humans, expressing human-like emotions, detecting and responding in a near-human way to the robot's environment, and the like.
[00527] The social robot Software Developer Kit (SDK) is a web-based application tool adapted to give developers an easy way to build social robot skills (e.g., robot applications). The social robot SDK facilitates skill development via a number of primary components that are compatible with, among other things, Atom/Electron and Node.js. These include animation, behavior creation, skill simulation, natural language interactions with non-verbal and paralinguistic social cues, and robot maintenance. With the SDK, one has access to many aspects of the social robot, including the social robot's speech technology, facial recognition and tracking, touch input technology, movement systems, vision systems and the like, as well as higher level functions and skills of the social robot that are built using those. [00528] In an embodiment, the social robot SDK may be downloaded from an Atom Package Manager via a suitable user interface. The Social robot Atom package (called social robot-sdk) may be built on Atom™, an Integrated Developer Environment (IDE) that is built on Electron™. Social robot skills may be written in JavaScript, for example, and run on Electron™. One may author social robot skills directly in the social robot Atom package GUI or by making direct API calls in JavaScript. The social robot Atom package includes a simple UI and may provide access to Chrome DevTools™, a library of pre-created animations, behaviors, images, and sound effects, and tools and markup language for designing highly expressive, character-rich performance of spoken lines with emotive overlays and non-verbal or paralinguistic cues, plus the ability to create skills from scratch.
[00529] The social robot command line interface (CLI) may provide the ability to generate and deploy skills directly through a command line interface. Both the social robot Atom package and the social robot CLI may provide access to the social robot Simulator, which allows one to preview animations, 3D body movements, interactions, expressions, and other manifestations of skills before sending them to a social robot.
[00530] In accordance with exemplary and non-limiting embodiments, the social robot's animation system may be responsible for coordinating expressive output across the robot's entire body, including motors, light ring, eye graphics and the like. The system may support playback of scripted animations, as well as real-time procedurally rendered expressive behaviors such as expressive look-at and orientation behaviors. Additionally, the system ensures that the robot transitions smoothly from pose to pose or from the end of one animation or expressive behavior into the next.
[00531] The major elements of the PCD SDK may include:
• Skill Structure - a data structure to which most skills adhere;
• Main Script - a set of skill entry points that are exported when the robot initializes;
• Styling - a data structure for performing style operations with the body parts of the robot;
• Builders and Instances - data structures that carry data for implementation of skill elements;
• Degrees of Freedom (DOFs) - representation of dynamic elements controlled by an animation module;
• Animations - controlling playback of scripted animations;
• Transitions - ensuring smooth motion from one animated behavior to the next;
• Gaze/Orient Behaviors - used to expressively direct parts of the social robot toward locations of interest;
• Animation Editor - graphical interface for creating animation files interactively using distinct animation layers;
• Behavior Editor - permits editing of a tree of behaviors for each skill;
• Speech Rules Editor - create rules to be used when speech is detected via the robot's Natural Language Understanding or listening capability; and
• MIM editor - create interactive dialog rules and functionality for natural sounding dialog.
PCD SDK
[00532] Referring to FIG. 27, which depicts a block diagram of an architecture of a PCD-specific software development kit, a PCD Software Developer Kit (SDK) 2704 may operate as a web-based application tool adapted to give developers an easy way to build PCD Skills (robot applications). The PCD SDK consists of four primary components, some of which may have distinct user interfaces, tool sets, APIs and the like. The primary components include an animation component 2706, a behavior creation component 2702, a skill simulation component, and a robot maintenance component. With the SDK, one has access to the PCD's speech technology, facial recognition and tracking, touch input technology and other sensory inputs and expressive outputs.
[00533] One may author PCD Skills directly in the component user interfaces or by making direct API calls in a programming language, such as JavaScript. The PCD SDK may include user interfaces and may provide access to a library of pre-created animations, behaviors, images, and sound effects, and the like plus the ability to create skills from scratch.
[00534] The PCD SDK may facilitate generating skills that the PCD may perform. In accordance with exemplary and non-limiting embodiments, skills may require adherence to a basic structure that the SDK may facilitate creating. In exemplary embodiments, the structure described below may be initiated by default when one creates a new skill via the SDK's skill generator capabilities such as the behavior and animation editors described herein. Each skill may be required to have an index.html file that implements the user interface (UI) of the skill and contains elements such as a link to a style file, a tag that defines a division or section of code, and an id of PCD containing all UI elements.
[00535] Each skill may also be configured with a main script that is responsible for exporting one or more skill functions during execution of the skill by the PCD. [00536] Actions of a skill may be configured with the PCD SDK with certain margins or paddings before and/or after the actions to facilitate realistic transitioning from one action to another, such as from an action of taking a photograph to displaying the captured image. The PCD SDK facilitates configuring skills, transitions among skills, and relevant PCD expressions.
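As a purely illustrative sketch (the exported function names and lifecycle hooks are assumptions, not the SDK's actual API), the main script mentioned above might export its skill entry points roughly as follows:

    // main.js - hypothetical skill entry points exported for the robot runtime.
    'use strict';

    let running = false;

    module.exports = {
      // Called once when the skill is loaded.
      start() {
        running = true;
        console.log('Skill started');
      },

      // Called repeatedly (or on events) while the skill is active.
      update() {
        if (!running) return;
        // Skill logic would go here, e.g. starting a behavior tree.
      },

      // Called when the skill is exited so resources can be released.
      stop() {
        running = false;
        console.log('Skill stopped');
      }
    };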
[00537] The PCD command line interface (CLI) may provide the ability to generate and deploy skills directly through a command line interface. Both the PCD Atom package and the PCD CLI may provide access to the PCD Simulator, which allows one to preview animations, 3D body movements, interactions, and expressions before sending them to a PCD.
BEHAVIORS & BEHAVIOR EDITOR
[00538] With reference to FIG. 27, behavior editor 2702 in the SDK 2704 may include trees, nodes, leafs, parents, decorators and the like. Behaviors may be accessible in the SDK 2704 as classes that can be configured, ordered, simulated, and deployed with the SDK 2704. Exemplary behavior classes include Blink, Execute Script, ExecuteScriptAsync, Listen, ListenEmbedded, ListenJs, LookAt, Nul, Parallel, PlayAnimation, PlayAudio, Point3D, Random, ReadBarcode, Sequence, Subtree, SubtreeJs, Switch, TakePhoto, TextToSpeech, TextToSpeechJs, TimeoutJs, and the like.
[00539] Behavior trees are an expressive tool which may be used to model the behavior and control flow of autonomous agents. They are popular in the robotics and video game industries for their ability to coordinate concurrent actions and decision-making processes. Unlike state machines (where there is a single active state at any given time) behavior trees can run multiple behaviors in parallel. This makes them very powerful tools for coordinating all of the PCD sensory input with expressive output. Behavior trees are hierarchical, unlike state machines, which are represented as graphs.
[00540] Each created skill may contain the main.bt behavior tree by default. This is the behavior tree that executes when the skill is run. One may create additional behavior trees in a skill for main.bt to reference. Behaviors may execute from the top to bottom of a behavior tree unless otherwise specified.
[00541] With reference to FIG. 28, there is illustrated an exemplary and non-limiting embodiment of a behavior tree snippet 2800 in which two behaviors are executed at the same time, then a second behavior, then a third behavior. Specifically, playing the camera animation and taking a picture are first performed in parallel as part of the first behavior. Upon completion, second and third behaviors, comprising displaying photos and removing the photo from the display, respectively, are performed. [00542] A behavior is a node in the behavior tree that performs an action. This could be a very simple action (like playing an audio file) or a complex one (like asking someone's name).
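For illustration, the FIG. 28 snippet could be written out declaratively as nested data. The constructor names below mirror the behavior classes listed earlier, but the exact builder syntax and the displayPhoto/removePhoto script calls are assumptions made for this sketch.

    // A hypothetical declarative rendering of the FIG. 28 snippet: the camera
    // animation and the photo capture run in parallel, then the photo is
    // displayed, then it is removed from the display.
    const mainTree = {
      type: 'Sequence',
      children: [
        {
          type: 'Parallel',
          children: [
            { type: 'PlayAnimation', args: { file: 'camera.anim' } }, // hypothetical asset name
            { type: 'TakePhoto' }
          ]
        },
        { type: 'ExecuteScript', args: { script: 'displayPhoto()' } }, // show the captured image
        { type: 'ExecuteScript', args: { script: 'removePhoto()' } }   // clear it from the screen
      ]
    };

    console.log(JSON.stringify(mainTree, null, 2));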
[00543] Behaviors may assume at least four distinct states:
a. INVALID: The behavior has not started yet
b. IN PROGRESS: The behavior is actively performing some action like playing an animation, or speaking using text-to-speech.
c. SUCCEEDED: The behavior has finished its task successfully.
d. FAILED: The behavior has finished its task unsuccessfully. Note that this does not necessarily mean an error has occurred.
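A minimal sketch of these states, and of a leaf behavior moving through them, is shown below for illustration; the class shape is an assumption, not the SDK's actual base class.

    // The four behavior states described above.
    const Status = Object.freeze({
      INVALID: 'INVALID',
      IN_PROGRESS: 'IN_PROGRESS',
      SUCCEEDED: 'SUCCEEDED',
      FAILED: 'FAILED'
    });

    // A hypothetical leaf behavior that plays an audio file.
    class PlayAudio {
      constructor(file) {
        this.file = file;
        this.status = Status.INVALID;   // has not started yet
      }
      start() {
        this.status = Status.IN_PROGRESS;
        // In a real skill this would hand the file to the audio system and
        // succeed or fail from a completion callback; here we succeed at once.
        this.status = Status.SUCCEEDED;
      }
    }

    const beep = new PlayAudio('beep.wav');
    beep.start();
    console.log(beep.status); // SUCCEEDED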
[00544] A leaf behavior has no children. It is responsible for performing one action, like playing an audio file or making PCD speak.
[00545] With reference to FIG. 29, there is illustrated an exemplary and non-limiting embodiment of a leaf behavior. This illustrated leaf behavior plays an animation, takes a photo, pauses for a timeout, and executes JavaScript code.
[00546] In general, a parent behavior may have children. Its children may be either leaf behaviors or other parent behaviors. In the SDK 2704, there are four core parent behaviors.
a. Sequence: A Sequence will play its children in order from top to bottom. While any of its children are in an IN PROGRESS state, the Sequence will also be in an IN PROGRESS state. If all of a Sequence's children return with status SUCCESS, the Sequence will return with status SUCCESS. As soon as any of its children return with status FAILED, then the Sequence will immediately return with status FAILED.
b. Parallel: A Parallel parent behavior will start all of its children at the same time. A Parallel will remain IN PROGRESS until either one of its children fails (in which case the Parallel will return status FAILED) or until all of its children have SUCCEEDED (in which case the Parallel will return with status SUCCESS).
c. Switch: A Switch behavior is how behavior trees deal with branching logic. A Switch will test all of its children in sequence until one succeeds, at which point the Switch will execute that child and then return with status SUCCEEDED. A Switch will always succeed even if all of its children fail.
d. Random: A Random behavior will choose one of its children at random. If that child fails, then Random will fail. If the child succeeds, then Random will succeed. [00547] With reference to FIG. 30, there is illustrated an exemplary and non-limiting embodiment of both sequence and parallel parent behaviors. The Sequence parent behavior executes its children in order from top to bottom. The Parallel parent behavior executes all its children at the same time.
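A compact, illustrative sketch of two of the parent behaviors above follows; it uses the same simplified status values as the earlier sketch and is not the SDK's actual implementation.

    // Hypothetical Sequence: runs children top to bottom, failing as soon as
    // any child fails and succeeding only if every child succeeds.
    function runSequence(children) {
      for (const child of children) {
        const status = child.run(); // each child returns 'SUCCEEDED' or 'FAILED'
        if (status === 'FAILED') return 'FAILED';
      }
      return 'SUCCEEDED';
    }

    // Hypothetical Random: picks one child and adopts its result.
    function runRandom(children) {
      const child = children[Math.floor(Math.random() * children.length)];
      return child.run();
    }

    const say = text => ({ run: () => { console.log(text); return 'SUCCEEDED'; } });
    console.log(runSequence([say('Hello'), say('Nice to meet you')])); // SUCCEEDED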
[00548] Decorators are not nodes in the tree. They are components that can be added onto a behavior to modify the state of that behavior. Decorators can do four things:
a. Prevent a behavior from starting until some condition is met.
b. Force a behavior that is IN PROGRESS to succeed.
c. Force a behavior that is IN PROGRESS to fail.
d. Restart a behavior that has either succeeded or failed.
[00549] Example decorator classes accessible through the SDK 2704 include:
a. Case that performs a single check before the behavior it's decorating starts. If that check fails, Case fails the behavior.
b. FailOnCondition that explicitly interrupts a behavior that it is decorating if a condition being evaluated is met.
c. StartOnAnimEvent that begins execution of its behavior when an animation fires an event from its event layer.
d. StartOnCondition that prevents the behavior that it is decorating from starting until a condition that it is evaluating is met.
e. StartOnEvent that prevents the behavior that it is decorating from starting until an event is emitted from a behavior tree's global emitter.
f. SucceedOnCondition that explicitly interrupts the behavior that it is decorating if the condition that it is evaluating is met.
g. SucceedOnEmbedded that succeeds the behavior that it is decorating when the specified audio phrase is spotted.
h. SucceedOnEvent that succeeds the behavior that it is decorating when an event is emitted from the behavior tree's global emitter.
i. SucceedOnListen that performs audio speech recognition and applies and parses the results according to a rules file.
j. SucceedOnListenJs that performs audio speech recognition, and parses and applies the results according to a rules file.
k. TimeoutFail that forces the behavior it is decorating to fail after the specified amount of time.
l. TimeoutSucceed that forces the behavior that it is decorating to succeed after the specified amount of time.
m. TimeoutSucceedJs that forces the behavior that it is decorating to succeed after the specified amount of time.
n. WhileCondition that evaluates a condition after its component succeeds. If the condition being evaluated is met, the component is started again. If the condition is not met, the status of the component is delivered (returned) to the function that activated the decorator.
[00550] With reference to FIG. 31, there is illustrated an exemplary and non-limiting embodiment of a Case decorator preventing the Sequence behavior from starting until a condition is met. The StartOnAnimEvent decorator prevents the TakePhoto behavior from starting until an event has been emitted from the camera animation in the Play Animation behavior.
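By way of a hedged illustration of the decorator concept, the sketch below shows how a TimeoutSucceed-style decorator might force a wrapped behavior to succeed after a time limit; the class, tick, and stop names are assumptions for this example and do not reflect the SDK's actual decorator implementation.

// Hypothetical sketch of a TimeoutSucceed-style decorator: it forces the
// behavior it is decorating to report SUCCEEDED once a time limit elapses.
class TimeoutSucceedDecorator {
  constructor(behavior, timeoutMs) {
    this.behavior = behavior;
    this.timeoutMs = timeoutMs;
    this.startTime = null;
  }

  tick() {
    if (this.startTime === null) this.startTime = Date.now();
    const status = this.behavior.tick();
    // While the behavior is still running, check whether the timeout has expired.
    if (status === 'IN_PROGRESS' && Date.now() - this.startTime >= this.timeoutMs) {
      this.behavior.stop(); // assumed hook for interrupting the wrapped behavior
      return 'SUCCEEDED';
    }
    return status;
  }
}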
[00551] With reference to FIG. 32 there is illustrated an exemplary and non-limiting embodiment of a user interface rendering of a main behavior tree of a skill.
[00552] Each skill contains the main behavior tree by default. This is the behavior tree that executes when the skill is run. One may create additional behavior trees in a skill for main.bt to reference.
[00553] Behaviors execute from the top to bottom of a behavior tree unless otherwise specified.
[00554] With reference to FIG. 33 there is illustrated an exemplary and non-limiting embodiment of a TextToSpeech leaf behavior as a child of the Parallel parent, which is a child of the Sequence parent. The TextToSpeech behavior has a StartOnAnimEvent decorator and a description. There are three types of behaviors that can be added to a behavior tree: parent behaviors, leaf behaviors, and decorators.
[00555] Parent behaviors can have child behaviors. Children may be leaf behaviors or other parent behaviors.
[00556] There are four parent behaviors in the Behavior Editor:
a. Sequence: executes its children in order from top to bottom.
b. Parallel: executes its children all at once.
c. Switch: executes the first child that succeeds from top to bottom. d. Random: executes a random one of its children.
[00557] In some embodiments, leaf behaviors cannot contain other behaviors. They execute a single action, like playing an audio file or making PCD speak. [00558] As illustrated, decorators modify when a behavior starts, succeeds, or fails. They can also restart behaviors.
[00559] With reference to FIG. 34 there is illustrated an exemplary and non-limiting embodiment of a decorator configured to change a state of a behavior based on a measurable condition.
[00560] With reference to FIG. 35 there is illustrated an exemplary and non-limiting embodiment of a user interface for specifying arguments of a behavior.
[00561] When a behavior is selected in the Behavior Tree, any arguments available for that behavior appear in the Behavior Arguments pane. When a decorator is selected in the Decorator
Pane, any arguments for that decorator appear in the Decorator Arguments pane.
[00562] There are eight types of arguments in the Behavior Editor:
a. string
b. file
c. integer
d. number
e. enum
f. boolean
g. function
h. SSML subset
ANIMATIONS & ANIMATION EDITOR
[00563] In accordance with exemplary and non-limiting embodiments, the PCD's animation system may be responsible for coordinating expressive output across the robot's entire body, including motors, light ring, eye graphics and the like. The system may support playback of scripted animations, as well as expressive gaze and orientation behaviors. Additionally, the system ensures that the robot transitions smoothly from pose to pose or from the end of one animation or expressive behavior into the next.
[00564] Referring to FIG. 36 that depicts an illustration of the lifecycle of builders and instances across configuration, activation, and run/control, builders may be used to store configuration information and other data, and then may be used (and re-used) to spawn active instance handles. A few examples of builders and instances may include AnimationBuilders spawning Animationlnstances, LookatBuilders spawning Lookatlnstances, TransitionBuilders spawning "in-between" motions, and the like. A builder 3602 may be configured during a configuration process step 3604. During an active process step 3608, a builder 3602 may receive a start input that may initiate the builder creating an instance 3610 of the builder based on, for example, current parameters. During a run/control process 3612, a builder 3602 may stand ready to be reused while the initiated instance 3610 may perform actions such as providing motor/graphic/LED values to operational elements of the PCD. In embodiments, an initiated instance of a builder may be used only once.
[00565] Referring to FIG. 37, that depicts a diagram that provides a map of the robot's individual degrees of freedom (DOFs), DOF value types, and common DOF groupings, dynamic elements that are synchronized via the animate module may be represented as degrees of freedom. DOFs may represent an animate module's smallest separable units that a user of the PCD SDK may control. Some DOFs control the rotation of the PCD's body motors, other DOFs control color channels of the LED light ring, and others control graphical parameters related to the on-screen eye and overlay. While DOFs can be controlled individually, they are typically grouped with other DOFs into DOF sets. Commonly-used DOF sets are provided, for example, in resources available through the SDK. FIG. 37 provides a map of the robot's individual DOFs (in italics), DOF value types, and common DOF groupings (in CAPS).
[00566] Referring to FIG. 38, that depicts an animate module following a policy of exclusive DOF ownership by the most recently triggered animate instance, an animate module of the PCD SDK may follow a policy of exclusive DOF ownership by a most recently triggered instance. In FIG. 38, instance A, which is controlling the robot's body, LED, and eye, is interrupted by Instance B, which is configured to control only the PCD's body DOFs. As soon as Instance B is triggered, it assumes exclusive control over the robot's body, while Instance A continues to control the robot's LED and eye. Various other combinations and overlapping instances of DOF control are incorporated herein.
[00567] In addition to determining which DOF is controlled by a builder instance 3610 of an animation provided by a builder 3602, animation operation control may be adjusted by parameters or arguments such as those depicted in FIG. 39 including setSpeed 3902, setNumLoops 3904, or related methods.
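Combining the builder/instance lifecycle of FIG. 36 with configuration methods such as setSpeed and setNumLoops, skill code might exercise an animation roughly as sketched below; the createAnimationBuilder and startAnimation names are assumptions made by analogy with the LookatBuilder methods described later, and the animation file name is illustrative only.

// Configuration step: create and configure a builder (factory name assumed).
const builder = animate.createAnimationBuilder('wave.anim');
builder.setNumLoops(2);  // configuration persists on the builder
builder.setSpeed(1.5);

// Activation step: starting the builder spawns a single-use instance that
// drives the DOFs the animation touches (motors, light ring, eye graphics).
const instance = builder.startAnimation();

// Run/control step: the instance can be stopped early; the builder itself
// remains available and can spawn further instances with the same settings.
instance.stop();
const nextInstance = builder.startAnimation();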
[00568] Referring to FIG. 40, that depicts configuring a transition between animations, transitions ensure smooth motion from the end of one animated behavior into the beginning of the next. When an instance of AnimationBuilder 4002 is selected by a user of the PCD SDK for configuring animation(s), the SDK may automatically generate a transition motion 4004 from the PCD's current state (e.g., its body state) to the start of the selected animation. This transition motion may be inserted in a skill flow before the animation instance, as illustrated in FIG. 40. The transition motion may have a short (or even zero) duration if the PCD's current state and the PCD state for the selected animation already line up well. [00569] In accordance with exemplary and non-limiting embodiments, an AnimationBuilder 4002 function of the PCD SDK may come pre-configured with a system-default TransitionBuilder 4004 that may be used to generate the inbound transition for the animation being defined by the AnimationBuilder 4002. This default behavior may be modified via the builder's setTransitionIn function. This function may be set via a command, such as animationBuilder.setTransitionIn(transitionBuilder) or via a user interface of the SDK. The SDK may offer at least two basic types of transition builders. For example, LinearTransitionBuilders may generate transition motions using simple linear blending, while AccelerationTransitionBuilders may generate motions that obey configurable acceleration limits. Transition builders may be created from scratch, or by cloning and modifying an existing transition; the SDK may facilitate any of these modes of generating and customizing transitions.
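For illustration of the setTransitionIn mechanism just described, a hedged sketch follows; setTransitionIn appears above, while the createAccelerationTransitionBuilder factory and its acceleration argument are assumptions for this example.

// Assumed factory names; only setTransitionIn is taken from the description above.
const animationBuilder = animate.createAnimationBuilder('nod.anim');

// Swap the system-default inbound transition for an acceleration-limited one so
// the robot eases from its current pose into the animation's starting pose.
const transitionBuilder = animate.createAccelerationTransitionBuilder();
transitionBuilder.setAccelerationLimit(2.0); // hypothetical parameter and units

animationBuilder.setTransitionIn(transitionBuilder);
animationBuilder.startAnimation();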
[00570] Referring to FIG. 41, that depicts the timing of exemplary core animation events, an animation instance as described herein, which may include, for example, playback of an animation sequence, may be monitored via a number of different events, including STARTED, STOPPED, and CANCELLED, plus other custom events. Event listeners for monitoring active animation instances may be installed using an event listening feature of the AnimationBuilder. In the example of FIG. 41, a STARTED event fires when the animation instance begins, after the completion of any inbound transition motion, if applicable (e.g., the event fires at animation-time 0). The STOPPED event fires when the animation instance finishes or gets completely interrupted. A STOPPED event's interrupted property can be checked to differentiate these cases. As an example, payload.interrupted === true if a newly-triggered instance truncates a currently executing animation, or if the currently executing animation's stop method is executed before the end of the animation. The CANCELLED event fires if the animation instance is removed or completely interrupted before it is ever STARTED.
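As a hedged sketch of the event monitoring described above, the listener-registration call below is an assumption; only the STARTED, STOPPED, and CANCELLED event names and the payload.interrupted property are taken from this description.

const builder = animate.createAnimationBuilder('celebrate.anim');

// Assumed listener-registration API on the AnimationBuilder's event listening feature.
builder.on('STARTED', () => {
  console.log('animation instance began (animation-time 0)');
});

builder.on('STOPPED', (payload) => {
  // Distinguish a natural finish from an interruption.
  if (payload.interrupted === true) {
    console.log('animation was truncated by another instance or by stop()');
  } else {
    console.log('animation finished normally');
  }
});

builder.on('CANCELLED', () => {
  console.log('animation was removed before it ever started');
});

builder.startAnimation();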
[00571] Animations may continue to produce events as long as they remain in control of at least one of the robot's DOFs. The STOPPED event fires when no DOFs remain for a particular animation. Thus, it is common for multiple animation instances to be producing events at the same time. FIG. 42 depicts a timeline of events 4202 produced by two overlapping animation instances. Animation A 4204 controls DOFs for body, LED, and eye. Animation B 4208 controls only the body DOF, so it interrupts Animation A's body DOF, but does not interrupt Animation A's LED or eye animations. Events 4202 are produced for Animation A 4204 LED and eye DOFs while Animation B 4208 produces events for the body DOF. [00572] Figs. 43 and 44, which depict an example of a gaze orientation configuration interface, represent a capability of the methods and systems for configuring and operating the animation capabilities to perform gaze and orientation by the PCD. Gaze and orientation capabilities of the PCD may be triggered to produce expressive gaze behaviors. These behaviors may be used to expressively direct the PCD's body and/or eye towards locations of interest in the surrounding environment.
[00573] Gaze behaviors may be managed in a fashion similar to animations. Behavior triggering is accomplished using LookatBuilder objects. First, one creates a builder using the createLookatBuilder method. Next, one may optionally configure the builder. Finally, one triggers an instance of the behavior via the builder's startLookat method.
[00574] Gaze behaviors may operate in one of two modes: single-shot mode or continuous mode. This mode may be configured via the builder's setContinuousMode method. In single-shot mode, a gaze behavior is much like a scripted animation, expressively orienting the robot towards the selected target and then stopping once the target is reached. In continuous mode, the behavior never stops on its own, and the target location may be repeatedly modified using the instance's updateTarget method. This may be useful for face tracking or other situations where the PCD is following a moving target. FIG. 43 depicts configuring a single-shot mode gaze operation. In continuous mode, the target point may be updated at any time. FIG. 44 depicts including custom code with the PCD SDK with the LookAt function to toggle between two different gaze targets every three seconds.
[00575] Referring to FIG. 45, that depicts a three-dimensional coordinate system of the social robot referenced by the software development kit, providing targets to the gaze API may sometimes require doing math with 3D vectors. It may be preferred to use a preconfigured module for 3D vector arithmetic and other linear algebra operations. A new 3D vector object may be created through the following example: let target = new animate.THREE.Vector3(1.0, 0.0, 1.0). In embodiments, the PCD may use a 3D coordinate system with its origin 4502 at the center of the robot's base. From the PCD's perspective, the positive X axis 4504 points forward, the positive Y axis 4508 points left, and the positive Z axis 4510 points up, as illustrated in the diagram of FIG. 45.
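Pulling together the coordinate system and the LookatBuilder flow described above, a hedged sketch follows; createLookatBuilder, setContinuousMode, startLookat, and updateTarget come from this description, while passing an initial target to startLookat, the interval-based toggling, and the specific vector values are assumptions for illustration.

// A target one meter ahead of the robot and one meter up, expressed in the
// PCD's base-centered frame (+X forward, +Y left, +Z up).
let target = new animate.THREE.Vector3(1.0, 0.0, 1.0);
let leftTarget = new animate.THREE.Vector3(1.0, 0.5, 1.0);

// Continuous-mode gaze: the behavior keeps running and its target point may
// be updated at any time (useful for face tracking or target following).
const lookatBuilder = animate.createLookatBuilder();
lookatBuilder.setContinuousMode(true);
const lookat = lookatBuilder.startLookat(target); // initial-target argument assumed

// Toggle between the two targets every three seconds, as in the FIG. 44 example.
let useLeft = false;
setInterval(() => {
  lookat.updateTarget(useLeft ? leftTarget : target);
  useLeft = !useLeft;
}, 3000);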
[00576] Referring to FIG. 46, that depicts a user interface of the software development kit for editing animations, animations may be created via this interface. Animations may be configured and stored in individual animation files. The animation editor of FIG. 46 may include a body pane 4602, an eye pane 4604, and a timeline pane 4608. Each pane may be zoomed; the color of each pane may be changed; the panes of the animation editor may be reset. The body pane 4602 facilitates direct control of the body segments through point, click, and drag functionality.
[00577] The timeline pane 4608 may be used to adjust animation length by adjusting the default animation of 30 frames at a frame rate of 30 frames per second. Timelines in the timeline pane may be scaled, such as by a factor of two and the like. Certain frames in the timeline may be locked by setting a keyframe parameter. Frames in the timeline pane 4608 may be edited with commands such as cut, paste, copy, delete and the like. Frames on a timeline may be operated via a simulation function so that the body pane 4602 and/or the eye pane 4604 may depict the sequence of animation frames selected in the timeline pane 4608. Simulations may be operated continuously, one frame at a time, or until a stop command is entered, and the like.
[00578] The Animation Editor enables the creation of animations via layers. One selects a layer in the Animation Editor in order to create an animation for that layer.
[00579] In some cases, multiple layers of the same type will blend. Some layers are additive. For example, a layer that moves the PCD's eye up by 1 and a layer that moves the eye down by 2 will blend to create an animation that moves the eye down by 1. Other layers are multiplicative. For example, a layer that scales the PCD's eye by 3 and a layer that scales the PCD's eye by 2 will blend to create an animation that scales the PCD's eye by 6. Layers that do not blend will default to the animations applied on the top layer.
[00580] FIG. 47 depicts a display screen in which violation of limits of the PCD may be indicated. When creating an animation, if the movements you ask the PCD to perform will cause him to exceed the internal hardware limits of his motors, the frames during which he moves too quickly turn red 4702 and an error message will appear in the Properties pane 4704. Animation errors can occur if a movement violates acceleration limits or velocity limits. These messages are just warnings. The PCD may still attempt to perform the movements, but the hardware may not be able to support those movements. Limits are based on the additive accumulation of all body layers. For example, if one adds an excessively fast keyframe on one Body layer, but cancels it out on another Body layer, the resulting animation is allowed and the errors will not appear. If a limit is exceeded, the warnings appear on every body layer in every affected area, since any Body layer may be the culprit or the solution. To correct limit violations, one can either reduce the number of degrees the PCD moves between keyframes, or increase the number of frames between keyframes.
[00581] Handling of motion between frames is referred to herein as "tweening." For example, a linear tween will move the PCD's body at a steady pace between keyframes. Animation tweens can be adjusted to ease in and/or out of a keyframe more slowly to mimic natural movement. Tweening affects how the animation moves from the keyframe on which it's applied to the next set keyframe (left to right on the Timeline).
[00582] Referring to FIG. 48, that depicts a user interface for configuring an eye layer for controlling the representative eye image of the social robot, the user interface can be used to manipulate the size, position, shape, and rotation of a PCD's eye. Tweening can also be applied to eye animations. The eye of the PCD can be resized, scaled with or without constrained proportions, reshaped, rotated and the like. The user interface of FIG. 48 facilitates direct control of the eye through point, click, and drag functionality.
[00583] Referring to FIG. 49, that depicts a user interface for configuring an eye texture layer for controlling a texture aspect of the representative eye image of the social robot, eye colors and textures may be configured via this user interface. The PCD eye layer configuration user interface may include a body pane 4902, an eye pane 4904, and a timeline pane 4908. Simulations of animations may be depicted in the body pane 4902 and may include body, light ring, eye, and audio simulation. The eye pane 4904 may provide both control and simulation of animation of the eye 4910 including at least color and texture animation. Colors interpolate from animation frame to frame through the RGB space and may be additive. Colors may be blended on top of textures; therefore, configuring the eye texture to other than the default white will impact how each color is depicted due to the impact of the underlying texture color.
[00584] Referring to FIG. 50, a user interface for configuring an eye overlay layer for controlling the representative eye image of the social robot is depicted. The user interface may include a body pane 5002, an eye overlay pane 5004, and a timeline pane 5008. The Overlay layer appears on the PCD's screen in front of his eye. Overlays can be modified in the eye overlay layer user interface. In this user interface, x values increase to the right and y increases down from the center of the screen, which is by default the center of the eye and is located at (x=0,y=0). The overlay is constructed substantially identically to the eye, including substantially all of the same properties. While the eye overlay layer content appears in front of the PCD's eye by default, this can effectively be reversed by swapping textures (i.e., swapping the eye texture for the eye overlay texture). Eye overlay layers may be added, removed, moved, tweened independently of the eye, resized with or without constrained proportions, reshaped and rotated.
[00585] Referring to FIG. 51, that depicts a user interface for configuring an eye overlay texture layer for controlling the representative eye image of the social robot, a texture for an eye overlay can be configured. The user interface of FIG. 51 enables the overlay texture to be changed, the overlay color to be changed, and the like. Although textures on different layers may not blend, colors may blend through RGB space and are additive. If an overlay has a colored texture, color manipulations may blend with the texture color as well.
[00586] Referring to FIG. 52, that depicts a user interface for configuring a background layer for controlling the background of a representative eye image of the social robot, a display background layer texture may be configured. A background texture layer may appear on the PCD screen behind the eye. The background texture can be changed as well as the background texture color.
[00587] Referring to FIG. 53, that depicts a user interface for configuring an LED disposed around a body segment of the social robot, the LED can be controlled for aspects such as on/off, on-intensity, on-color, color temperature, color saturation, rate of change, and the like. These elements may be selected and or entered in the LED configuration user interface of FIG. 53.
[00588] Events are described herein above in regards to animation operational control for stopping, starting, interrupting and the like. In addition to eye, body, LED and other layers, event layers may also be configured and/or customized through the PCD SDK user interfaces. FIG. 54 depicts a user interface for configuring an event with information such as the event name and a payload for testing a condition of an associated animation upon activation of the event. Events may also link animations with behaviors that are described in reference to Figs. 27-35 herein. To create a behavior that occurs during a specific frame of an animation, one first creates an event. Events link animations and behaviors. The behavior editor material included herein describes how to use events in the Behavior Tool. Multiple event layers are allowed. All events will be executed; events set to the same frame will be executed in order from top layer to bottom layer.
[00589] Referring to FIG. 55, that depicts a user interface for configuring an audio event layer, audio may be added to an animation by associating an audio file or other content with an animation. The audio event layer user interface also includes controls, such as a scrubber icon to allow a user to select a portion of an audio file to associate with an animation.
SPEECH RECOGNITION & SPEECH EDITOR
[00590] With reference to Fig. 56, there is illustrated an exemplary and non-limiting embodiment of a speech rule called "reservations.rule", created in the PCD SDK Speech Rules Editor 5600.
[00591] The PCD exhibits Natural Language Understanding (NLU). PCD can recognize two types of speech recognition rules:
a. Embedded rules - built-in phrases that PCD can listen for. b. Custom rules - rules coded by the developer to instruct PCD to listen for a custom phrase or word.
[00592] Embedded rules are built into the SDK. Additionally, phrase-spotting is built into PCD's software, so there is no requirement to reach up to the cloud to process speech.
[00593] For example, in order to tell the PCD to do something when someone says 'Hey PCD,' the ListenEmbedded behavior or SucceedOnEmbedded decorator may be used by specifying 'Hey PCD' as the ruleName in the arguments, and the PCD will return the string 'hey PCD' when it hears someone say 'hey PCD'.
[00594] The PCD SDK may come preloaded with one or more rules, such as a rule that recognizes a speaker saying "Hey PCD". The PCD may come configured to listen for these rules; no rule coding is needed.
[00595] With reference to Fig. 57, there is illustrated an exemplary and non-limiting embodiment of the PCD idling until someone says 'hey PCD,' at which point the PCD stops idling and uses text-to-speech to say hello.
[00596] A custom rule may be created to cause the PCD to listen for something other than what's available through embedded rules. This may be accomplished via the use of, for example, Listen behavior, ListenJs behavior, SucceedOnListen decorator, or SucceedOnListenJs decorator to accomplish this.
[00597] The logic of a speech rule is simple: PCD listens for auditory input and returns string variables based on what he hears. (Speech rules in the SDK currently only return string values, even if PCD hears a number. Support for returning numbers and integers is coming soon.) For example, you might create a string variable called book and tell the SDK to return the string 'air' if PCD hears the word 'flight' and to return the string 'lodging' if PCD hears the word 'hotel'. You might go further and tell the SDK to return the string 'air' if PCD hears the word 'flight' or 'air' or 'plane' or 'airplane' and to return 'lodging' if PCD hears 'hotel' or 'motel' or 'room' or 'suite'.
[00598] When designing speech rules, it's important to consider two things: (1) what variables should be returned for PCD to understand what he needs to accomplish and (2) the possible different ways a PCD user might articulate this information.
[00599] In general, a custom speech rule looks something like this:
[00600] TopRule = (
[00601] (phrase to listen for) {return_variable='string'}
[00602] );
[00603] or, for example,
[00604] TopRule = (
[00605] reserve me a flight {book='air'}
[00606] );
[00607] In the example above, PCD listens for the phrase 'reserve me a flight.' If he hears that exact phrase, the SDK returns a variable named book with value air. This isn't extremely useful though. Let's expand the rule to add a bit more complexity to it. Instead of only listening for
'flight,' let's add the option for PCD to listen for 'hotel' as well. We can use standard OR logic in our code.
[00608] TopRule = (
[00609] reserve me a
[00610] flight {book='air'} | hotel {book='lodging'}
[00611] );
[00612] Now PCD is listening for 'reserve me a' and then EITHER 'flight' OR 'hotel'.
[00613] The table below shows what the SDK would return for various spoken input using the above rule:
[00614] [Table: the value of the book variable returned by the SDK for various spoken inputs; presented as an image (imgf000112_0001) in the original document.]
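Extending the rule above along the lines suggested earlier (listening for synonyms such as 'plane' or 'motel'), a custom rule might look roughly as follows; this sketch simply repeats the word {variable='value'} and | constructs shown above, and the actual rule syntax may offer more compact grouping.

TopRule = (
reserve me a
flight {book='air'} | air {book='air'} | plane {book='air'} | airplane {book='air'} |
hotel {book='lodging'} | motel {book='lodging'} | room {book='lodging'} | suite {book='lodging'}
);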
[00615] The Listen behavior and SucceedOnListen decorator take file arguments in which one selects a rule file.
[00616] With reference to Fig. 58, there is illustrated an exemplary and non-limiting embodiment of creating custom listening rules for PCD using files of the ".rule" type.
[00617] With reference to Fig. 59, there is illustrated an exemplary and non-limiting embodiment of telling the PCD to listen for 'hey PCD'.
[00618] With reference to Fig. 60, there is illustrated an exemplary and non-limiting embodiment of creating dynamic listening rules for the PCD using JavaScript.
MIMS
[00619] To simulate natural dialog, the PCD SDK facilitates configuring and using mechanisms for introducing variability, handling errors, and coordinating Voice User Interface (VUI) and Graphical User Interface (GUI) interactions. The PCD SDK provides Multimodal Interaction Modules (MIMs) to remove the burden from developers and to provide consistency across skills. As the name implies, MIMs handle interactions that have more than one mode, including voice (VUI) and touchscreen (GUI) modes. When prompted with a question like "Do you want to share this photo?" the expectation is that the user can answer either by saying "yes" or by tapping a button on PCD's touchscreen. MIM behavior comprises a state machine that is configured across a variety of parameters including the MIM type (e.g., question, announcement, optional response, and the like), a speech recognition rule used to parse the user's utterances, text to speech (TTS) prompts, prompts when an audio input is expected but is not received (e.g., the user has not responded to a prompt), unmatched prompt responses, and the like. MIMs are also designed to make TTS behaviors more flexible in a predefined way. The PCD SDK MIM Editor allows you to match up TTS prompts with speech recognition rule files and API input.
[00620] Referring to FIG. 61, that depicts a MIM editor, a user may configure MIMs in a variety of ways. To make dialog feel natural, MIMs can be configured with any number of prompts. When the MIM Behavior executes, it picks one of the available prompts from the appropriate category. The prompt can be chosen randomly or using logic supplied in the Condition field.
[00621] For example, multiple Entry-Core prompts can be defined with Condition fields that filter them based on available data. In this example there are two Entry-Core prompts that are appropriate for situations where the PCD has not identified the speaker (Entry-1 and Entry-3). In these cases the PCD will choose one of the two randomly. However, if the speaker is identified, PCD will use the Entry-2 prompts. Because an arbitrary number of Prompts+Conditions can be defined for any MIM, the developer has a straight-forward way to add lots of prompt variability to dialog interactions.
[00622] By defining sample utterances in the MIM configuration, the PCD can automatically generate Graphical User Interface (GUI) controls that can be used as an alternative to voice interaction. The CheckLights MIM in FIG. 61 uses a YesNo rule to process the user's utterances. By providing "yes" and "no" as sample utterances, the PCD is able to present Yes and No touchscreen buttons on his screen. Tapping them produces the same result as saying "yes" or "no." In this example, the Failures to Trigger GUI field is set to 1. This tells the PCD to present the GUI only if there is a NoMatch error, meaning that the speaker is having trouble communicating with the PCD via voice. When this value is set to 0 the PCD will always present the GUI.
[00623] Like listen behavior described herein, MIMs are configured with a speech recognition rule to parse the user's utterances. When the PCD's automatic speech recognition (ASR) system returns a transcript of the user's utterance, this transcript is passed to the PCD's on-board Natural Language Understanding (NLU) parser. To determine the semantic meaning of the utterance, the parser looks for patterns that are defined in a rule file associated with the MIM. Rules may comply with a rule syntax that enables multiple responses to comply with a MIM expectation. In an example, a rule may be configured so that an expected response of YES may include "right", "I'm good", and the like.
[00624] FIG. 62 depicts a user interface of the PCD SDK that facilitates editing MIM rules. By typing a phrase into the input field 6202, a result object is displayed in the result pane 6208 for a given rule in the rule pane 6204.
[00625] Just like in human-to-human dialog, voice interactions with the PCD will sometimes include misunderstandings and recognition errors. MIMs recover from these errors as naturally as possible. As mentioned above, MIMs define a way for the PCD to respond when the PCD asks a question but can't make sense of the answer. By defining NoInput and NoMatch prompts described above, the PCD has appropriate ways to re-ask and/or re-phrase the question.
[00626] The PCD SDK also includes an API for MIMs. This API may enable timing out when a reply is expected but is not received. Various features of a MIM may be controlled by the API including entry string, match strings, no_match strings, repeat, thanks strings, verbose response strings, and the like.
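As an illustrative sketch only, a MIM exposing the API features listed above might be configured with an object along the following lines; the field names echo the features named in this paragraph, but the exact object shape, the YesNo.rule file name, and the mim.start call are assumptions rather than documented API.

// Hypothetical MIM configuration: prompts, rule file, and GUI fallback.
const sharePhotoMim = {
  type: 'question',
  rule: 'YesNo.rule',                       // speech recognition rule used to parse utterances
  entry: ['Do you want to share this photo?', 'Should I share this one?'],
  no_match: ['Sorry, was that a yes or a no?'],
  no_input: ['Just say yes or no, or tap a button on my screen.'],
  thanks: ['Got it.'],
  sampleUtterances: ['yes', 'no'],          // used to auto-generate GUI buttons
  failuresToTriggerGUI: 1                   // show the GUI only after one NoMatch error
};

mim.start(sharePhotoMim, (result) => {
  // result would carry the parsed answer (e.g., yes/no) back to the skill flow.
  console.log('MIM result:', result);
});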
FLOWS
[00627] The PCD can be operated through the use of control flows that are made up of activities such as flow-executor activities, MIM activities, and behaviors. FIG. 63 depicts an exemplary flow editor of the PCD SDK. PCD control flows may be interpreted by a flow engine executing on the PCD. The PCD SDK includes a flow editor tool for creating and editing robot skills in a flowchart-like fashion. Flows can be configured in the flow editor tool by adding activities and controls for the activities. These controls are called flow-executor activities and are used to control activities in the flow. Flow-executor activities can be used to begin a flow, execute parallel flows, end flows, access another flow, execute a behavior tree, interrupt a flow, skip around in a flow using throws and catches, execute arbitrary JavaScript, and the like. Other activities that can be added to a flow include MIM activities and behavior activities. Flows can also include transition processing that facilitates using results of one activity in the flow to determine the next activity to perform.
[00628] In embodiments, the PCD SDK may be used to create skills and assets for operating the PCD. The basic steps for creating a skill include using the SDK user interface to open a new skill project; create an animation; create a behavior tree to perform activities, such as body movement, eye activity, and LED ring illumination; create a speech rule that directs the PCD to listen; create one or more MIMs to facilitate multi-modal interaction in response to the PCD applying the speech rule; create a GUI menu to display on the display screen of the PCD; create a flow that brings these created elements together into a coherent sequential flow of activities; use the simulator to validate that the flow of activities that comprise the skill is as expected; download the skill to the PCD.
[00629] A behavior editor may facilitate configuring a hierarchical structure of PCD behaviors. A behavior editor may facilitate controlling behavioral activity of the PCD based on a behavioral tree structure that may be comprised of nodes, leafs, parent behaviors, and the like. A behavior editor may facilitate configuring PCD behaviors in a hierarchical behavior tree that orders behavior in at least one of sequential and parallel order. A behavior tree comprises control sequences of a PCD to facilitate coordinating actions of PCD processing resources and decision-making processes. A behavior editor may enable a user to control at least one perceptual system and at least one expressive system of a plurality of expressive systems of the PCD. A behavior tree editor may facilitate controlling PCD behaviors based on at least four distinct behavior states including an invalid behavior state, an in progress behavior state, a succeeded behavior state, and a failed behavior state. An invalid behavior state may indicate that a specific behavior has not yet started. An in progress behavior state may indicate that the PCD is actively performing a specific behavior. A succeeded behavior state may indicate that the PCD has finished performing a specific behavior based on a determination that the behavior was successful. A failed behavior state may indicate that the PCD has finished performing a specific behavior based on a determination that the behavior was unsuccessful. A behavior editor may facilitate configuring a leaf behavior as a functional element of a behavior tree defining a PCD behavior in a hierarchical structure of behaviors, wherein the leaf behavior has no lower level behaviors. A behavior editor may facilitate configuring a set of PCD behaviors as a combination of parent and child behaviors, wherein a child behavior is performed by the PCD after completion of a corresponding parent behavior. A parent behavior may be configured with a behavior editor to perform one or more child behaviors individually, such as in a predefined sequence, or in parallel. A child behavior may be selected by a parent behavior in a behavior tree based on at least one of sequential child behavior execution, parallel child behavior execution, switched child behavior execution, and random child behavior execution. A behavior editor may facilitate configuring at least one behavior decorator for a PCD behavior being configured by the behavior editor. A behavior decorator may operate on the PCD to modify a state of its corresponding behavior. A PCD SDK may include a user interface through which a behavior tree may be configured and associated with a PCD skill so that the behaviors in the associated behavior tree are executed when the skill executes on the PCD based on the behavior tree properties that are configured by a user with the behavior editor. A behavior editor may comprise a behavior tool suite of a PCD SDK. A behavior tool suite of a PCD SDK may comprise a behavior editor. A behavior editor may comprise a plurality of behavior user interfaces adapted to at least one of create, define, configure, simulate, revise, and deploy a behavior to a PCD.
[00630] An animation tool suite of a PCD SDK may include an animation editor adapted to facilitate controlling assets of a PCD, the assets of the PCD including at least one of a multi-axis body of a plurality of moveable segments, a light source, a display screen, and an audio output system. An animation editor may facilitate controlling PCD assets during transitions between animation actions. An animation editor may facilitate configuring a set of PCD animation actions based on animation builders that spawn execution by the PCD processor of at least one animation instance derived from a specific animation builder module. The at least one spawned animation instance controls assets of the PCD. At least one of the assets is exclusively controlled by the animation instance while the animation instance is executing on the PCD. An animation tool suite of a PCD SDK may instantiate an animation builder in an execution sequence of a PCD. An animation tool suite may provide access to control a plurality of degrees of freedom of a PCD, wherein each degree of freedom is associated with an actionable physical aspect of the PCD. Degrees of freedom of a PCD may be associated with one or more moveable body segments of the PCD. Degrees of freedom of a PCD may be associated with an electronic display screen of the PCD. Degrees of freedom of a PCD may be associated with a light source, such as a light ring disposed on one or more body segments of a PCD. Degrees of freedom of a PCD may include an illustrated eye displayed on the electronic display of the PCD. An animation tool suite may facilitate assigning one or more degrees of freedom of a PCD to an animation instance. An animation tool suite may facilitate changing control of a degree of freedom of a PCD between distinct animation instances. An animation tool suite may facilitate control of speed, interactions, and the like of animations of one or more degrees of freedom of a PCD. An animation tool suite for controlling degrees of freedom of a PCD may facilitate animation control based on events associated with an animation instance including started, stopped and cancelled animation events. An animation tool suite of a PCD SDK may include user interface control elements displayed in an electronic display of a computing device that may facilitate expressive gaze animation control of the PCD. Expressive gaze animation may be performed as a single movement to a gaze point that may be defined in a three-dimensional space relative to the PCD. Alternatively, expressive gaze animation may be performed as a continuous movement with gaze target tracking that facilitates animating the PCD to continuously gaze at the gaze target. An animation editor of a PCD SDK, such as one associated with an animation tool suite, may include a body pane in which a visual representation of the PCD is depicted, an eye pane in which a visual representation of an eye displayed on the electronic display screen of the PCD is depicted, and a timeline pane in which animation actions are depicted in a timeline of animations, transitions, and the like.
[00631] A multi-modal interaction module (MIM) editor for introducing variability, handling errors, and the like during interactions between a PCD and a human participant may facilitate controlling more than one mode of interaction. A MIM editor may facilitate configuring a plurality of multi-modal prompts to be produced by the PCD in response to a perceived condition, such as a spoken response, tactile input, a lack of response, and the like. A MIM editor facilitates controlling multi-modal responses during interactions with a participant, such as a human, another PCD, and the like. A MIM editor may facilitate controlling a display screen of the PCD so that it coordinates with audio output by the PCD, wherein the display screen is adapted to receive tactile input and associate that input with one of a plurality of response options based on a visual presentation on the display screen. A MIM editor may facilitate configuring a speech recognition rule that may control how a PCD's Natural Language Understanding processor interprets utterances from a human in proximity to the PCD. A MIM editor may include a user interface that may include a phrase input field, a result object display pane, and a speech recognition rule pane. A MIM editor may further facilitate recovery from errors or unrecognized responses through enabling user configuration of no-match and no-input actions to be taken by the PCD.
[00632] The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. The processor may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a coprocessor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more thread. The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like.
[00633] A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the process may be a dual core processor, quad core processors, other chip-level multiprocessor and the like that combine two or more independent cores (called a die).
[00634] The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The software program may be associated with a server that may include a file server, print server, domain server, Internet server, intranet server and other variants such as secondary server, host server, distributed server and the like. The server may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like. The methods, programs or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server.
[00635] The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more location without deviating from the scope. In addition, any of the devices attached to the server through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs.
[00636] The software program may be associated with a client that may include a file client, print client, domain client, Internet client, intranet client and other variants such as secondary client, host client, distributed client and the like. The client may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, programs or codes as described herein and elsewhere may be executed by the client. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client.
[00637] The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more location without deviating from the scope. In addition, any of the devices attached to the client through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs.
[00638] The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. The processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements. [00639] The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The cell network may be a GSM, GPRS, 3G, EVDO, mesh, or other network types.
[00640] The methods, program codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute program codes. The mobile devices may communicate on a peer-to-peer network, mesh network, or other communications network. The program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage device may store program codes and instructions executed by the computing devices associated with the base station.
[00641] The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. [00642] The methods and systems described herein may transform physical and/or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another.
[00643] The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure . Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it may be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context.
[00644] The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It may further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium.
[00645] The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high- level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions.
[00646] Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.
[00647] While the methods and systems described herein have been disclosed in connection with certain preferred embodiments shown and described in detail, various modifications and improvements thereon may become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the methods and systems described herein is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law.
[00648] All documents referenced herein are hereby incorporated by reference.

Claims

CLAIMS:
1. A system for developing a skill for a persistent companion device (PCD) comprising: an asset development library, accessible via an application programming interface (API) executing on a processor, configured to enable a developer to at least one of find, create, edit and access one or more content assets utilizable for creating a skill that is executable by the PCD;
an animation tool suite executing on the processor having one or more APIs via which operation of one or more physical elements of the PCD for the skill including at least two of an electronic display, a plurality of movable body segments, a speech output system, and a multi-color light source is specified by the developer, wherein the skill is executable by the PCD in response to at least one input that is defined by the developer;
a behavior editor executing on the processor for specifying one or more behavioral sequences of the PCD for the skill; and
a skill deployment facility executing on the processor for deploying the skill to an execution engine for executing the skill.
2. The system of claim 1, wherein the skill deployment facility deploys the skill via an API.
3. The system of claim 1, wherein the behavior editor facilitates operation of a sensory input system and an expressive output system of the PCD.
4. A system for enabling development of a persistent companion device (PCD) skill using a software development kit (SDK) comprising:
a logic level mapping system operating on a processor configured to map received inputs to the PCD to coded responses; and
a PCD behavior tool suite operating on the processor adapted to configure a perceptual engine of the PCD comprising:
a vision function system configured via the behavior tool suite to detect one or more vision function events and to inform the logic level mapping system of the one or more detected vision function events; and
a speech/sound recognition and understanding system configurable by the behavior tool suite to detect defined sounds and to inform the logic level mapping system of the detected speech/sounds; and
a PCD animation tool suite operating on the processor adapted to configure an expression engine to generate one or more animations expressive of at least one defined state in response to at least one input and to transmit the one or more animations to the logic level mapping system for mapping of the animations to the inputs.
5. The system of claim 4, wherein the defined state is at least one of an emotional state, a persona state, a cognitive state, and a state expressing a defined level of energy.
6. A system for configuring a persistent companion device (PCD) to perform a skill, comprising:
a software development kit executing on a networked server, comprising
a plurality of animation user interface screens through which a user configures animation associated with the skill, the plurality of user interface screens facilitating specification of the operation of physical elements of the PCD including at least two of an electronic display, a plurality of movable body segments, a speech output system, and a multi-color light source; and
a plurality of behavior user interface screens through which the user configures behavior of the PCD for coordinating robot actions and decisions associated with the skill, the plurality of behavior user interface screens facilitating operation of an expressive output system of the PCD in response to a sensory input system of the PCD;
wherein a graphical representation of the PCD in at least one of the animation user interface screens and the behavior user interface screens represents the movement of the PCD in response to inputs based on the configuration by the user.
7. The system of claim 6, further comprising a gaze orientation user interface screen through which a user configures the PCD to expressively orient a display screen of the PCD toward a target located in proximity to the PCD as a point in a three-dimensional space relative to the PCD, the PCD responding to the target in at least one of a single-shot mode and a continuous target-tracking mode.
8. A system for animating a persistent companion device (PCD), comprising:
an animation editor executing on a networked server providing access to PCD animation configuration and control functions of the PCD via a software development kit;
an electronic interface to a PCD, the PCD configured with a plurality of interconnected moveable body segments, motors for rotation thereof, at least one light ring, an electronic display screen, and an audio system;
a PCD animation application programming interface via which the animation editor controls at least a portion of the features of the PCD; and
a plurality of animation builders configurable by a user of the animation editor, the animation builders spawning animation instances that specify active animation sessions.
9. The system of claim 8, further comprising a behavior transition system for specifying transition of the PCD from a first animation instance to a second animation instance in response to a signal.
10. A system for controlling behaviors of a persistent companion device (PCD), comprising: a behavior editor executing on a networked server providing access to PCD behavior configuration and control functions of the PCD via a software development kit;
a plurality of behavior tree data structures accessible by the behavior editor that facilitate controlling behavior and control flow of autonomous robot operational functions, the operational functions including a plurality of sensor input functions and a plurality of expressive output functions, wherein the plurality of behavior tree data structures organize control of robot operational functions hierarchically, wherein at least one behavior tree data structure is associated with at least one skill performed by the PCD;
wherein each behavior tree data structure comprises a plurality of behavior nodes, each of the plurality of behavior nodes associated with one of four behavior states consisting of an invalid state, an in-progress state, a successful state, and a failed state; and
wherein each behavior tree data structure comprises at least one parent behavior node, the at least one parent node referencing at least one child behavior node and adapted to initiate at least one of sequential child behavior node operation, parallel child behavior node operation, switching among child behavior nodes, and randomly activating a referenced child behavior node.
11. The system of claim 10, wherein at least a portion of the behavior nodes are each configured with a behavior node decorator that functions to modify a state of its behavior node by performing at least one of preventing a behavior node from starting, forcing an executing behavior node to succeed, forcing an executing behavior node to fail, and re-executing a behavior node.
12. A system for recognizing speech with a persistent companion device (PCD), comprising: a PCD speech recognition configuration system that facilitates natural language understanding by a PCD, the system comprising a plurality of user interface screens by which a user operates a speech rule editor executing on a networked computer to configure speech understanding rules comprising at least one of an embedded rule and a custom rule;
a software development kit comprising a library of embedded speech understanding rules accessed by the user via the networked computer; and
a robot behavior association function of the software development kit by which a user associates speech understanding rules with at least one of a listen-type PCD behavior and a listen success decorator that the user configures to cause the PCD to perform an operation based on a successful result of a condition tested by the listen success decorator.
13. A persistent companion device (PCD) control configuration system comprising:
a PCD animation configuration system that facilitates controlling expressive output of the PCD through playback of scripted animations, responsive operation of the PCD for events detected by event listeners that are configurable by a user, and a plurality of animation layers that facilitate specifying animation commands;
a PCD behavior configuration system that facilitates controlling mechanical and electronic operation of the PCD;
a PCD gaze orientation configuration system that facilitates determining directional activity of the gaze of the PCD by specifying a target and a PCD gaze functional mode of at least one of single-shot and target-tracking; and
a PCD speech recognition configuration system comprising a plurality of embedded rules for recognizing human speech, and a user interface for customizing rules for recognizing human speech, wherein the human speech is captured by an audio sensor input system of the PCD.
14. The system of claim 13, wherein controlling mechanical and electronic operation of the PCD through robot behavior comprises controlling transitions between animated behaviors, controlling a plurality of animated behaviors in at least one of parallel control and sequential control, and controlling a plurality of child behaviors based on a behavior tree of parent and child behaviors, wherein a child behavior is activated based on one of a switch condition for selecting among the child behaviors and randomly selecting among the child behaviors.
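The sketches that follow are editorial illustrations only; they form no part of the claims, and every identifier, type, and value in them is hypothetical rather than taken from the actual PCD SDK. A first, minimal TypeScript sketch shows one way the logic level mapping of claims 4 and 5 could be pictured: received vision and speech percepts are mapped to coded responses, each response naming an animation expressive of a defined state.

```typescript
// Hypothetical sketch only -- names and types are illustrative, not the PCD SDK API.
type PerceptEvent =
  | { kind: "vision"; label: string }       // e.g. a detected face reported by the vision function system
  | { kind: "speech"; utterance: string };  // e.g. a phrase reported by the speech/sound recognition system

type CodedResponse = { animation: string; state: "happy" | "curious" | "idle" };

// Map each received percept to a coded response (claim 4 style).
function mapInputToResponse(event: PerceptEvent): CodedResponse {
  if (event.kind === "vision" && event.label === "face-detected") {
    // The chosen animation is expressive of a defined state (claim 5).
    return { animation: "greet-wave", state: "happy" };
  }
  if (event.kind === "speech" && event.utterance.toLowerCase().includes("hello")) {
    return { animation: "nod", state: "curious" };
  }
  return { animation: "breathe", state: "idle" };
}

// Example: mapInputToResponse({ kind: "speech", utterance: "Hello there" }) yields { animation: "nod", state: "curious" }.
```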
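For the gaze orientation of claim 7, a sketch under the same caveat might represent the target as a point in three-dimensional space relative to the PCD and distinguish a single-shot orientation from continuous target tracking; the coordinate frame and the pan/tilt conversion shown here are assumptions, not the device's actual kinematics.

```typescript
// Hypothetical gaze-orientation sketch -- coordinate frame and math are assumed, not the PCD's real kinematics.
type GazeMode = "single-shot" | "target-tracking";

interface GazeTarget { x: number; y: number; z: number; } // point in 3-D space relative to the PCD (assumed metres)

class GazeController {
  orient(target: GazeTarget, mode: GazeMode): void {
    // Convert the 3-D point into pan/tilt angles for the movable body segments.
    const pan = Math.atan2(target.y, target.x);
    const tilt = Math.atan2(target.z, Math.hypot(target.x, target.y));
    console.log(`orient pan=${pan.toFixed(2)} rad tilt=${tilt.toFixed(2)} rad mode=${mode}`);
    // In target-tracking mode the controller would keep re-evaluating the target pose over time.
  }
}

// Example: new GazeController().orient({ x: 1.0, y: 0.5, z: 0.3 }, "single-shot");
```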
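For the animation builders and animation instances of claims 8 and 9, a sketch (again with invented names) could have a builder accumulate keyframes and spawn instances, each instance standing for one active animation session, with a transition to a second instance triggered by a signal.

```typescript
// Hypothetical animation-builder sketch -- the keyframe format and timing model are assumptions.
interface Keyframe { timeMs: number; bodyAngles: number[]; eyeGraphic?: string; }

class AnimationInstance {
  constructor(readonly name: string, private keyframes: Keyframe[]) {}
  play(onDone?: () => void): void {
    // A real session would stream keyframes to motors, light ring, display, and audio; here we just wait it out.
    const durationMs = this.keyframes.length ? this.keyframes[this.keyframes.length - 1].timeMs : 0;
    setTimeout(() => onDone?.(), durationMs);
  }
  stop(): void { /* a real session would halt motor and display output here */ }
}

class AnimationBuilder {
  private keyframes: Keyframe[] = [];
  constructor(private name: string) {}
  addKeyframe(frame: Keyframe): this { this.keyframes.push(frame); return this; }
  build(): AnimationInstance { return new AnimationInstance(this.name, [...this.keyframes]); } // spawn a session
}

// Transition from one instance to another in response to a signal (claim 9 style).
function transitionOnSignal(current: AnimationInstance, next: AnimationInstance, signal: Promise<void>): void {
  signal.then(() => { current.stop(); next.play(); });
}
```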
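For the behavior trees of claims 10 and 11, a sketch could model the four behavior states, a parent node that runs its child nodes sequentially, and a decorator that forces its wrapped node to succeed; the class names and tick-based update loop are illustrative assumptions.

```typescript
// Hypothetical behavior-tree sketch -- class names and the tick/update model are assumptions.
enum BehaviorState { Invalid, InProgress, Succeeded, Failed } // the four states of claim 10

abstract class BehaviorNode {
  state: BehaviorState = BehaviorState.Invalid;
  abstract update(): BehaviorState; // advance one tick and report the resulting state
}

// Parent node that runs its referenced child nodes one after another (sequential operation).
class SequenceNode extends BehaviorNode {
  private index = 0;
  constructor(private children: BehaviorNode[]) { super(); }
  update(): BehaviorState {
    while (this.index < this.children.length) {
      const child = this.children[this.index].update();
      if (child !== BehaviorState.Succeeded) return (this.state = child); // propagate a non-success result
      this.index++; // child succeeded; move on to the next one
    }
    return (this.state = BehaviorState.Succeeded);
  }
}

// Decorator that modifies its node's state by forcing it to succeed (claim 11 style).
class SucceedDecorator extends BehaviorNode {
  constructor(private inner: BehaviorNode) { super(); }
  update(): BehaviorState {
    const innerState = this.inner.update();
    this.state = innerState === BehaviorState.InProgress ? innerState : BehaviorState.Succeeded;
    return this.state;
  }
}
```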
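Finally, for the speech understanding rules and listen success decorator of claim 12, a sketch might pair a rule (embedded or custom) with a listen-type behavior and run an action only when the tested condition succeeds; the matching logic shown is a stand-in, not the actual rule grammar.

```typescript
// Hypothetical speech-rule sketch -- the rule format and matching are stand-ins, not the real rule grammar.
interface SpeechRule {
  name: string;
  phrases: string[];   // utterances the rule should match
  embedded: boolean;   // true for a library-provided rule, false for a user-defined custom rule
}

const greetingRule: SpeechRule = { name: "greeting", phrases: ["hello", "hi there"], embedded: false };

// Listen-type behavior: return the matched rule name, or null when nothing matched.
function listen(rule: SpeechRule, heard: string): string | null {
  return rule.phrases.some(p => heard.toLowerCase().includes(p)) ? rule.name : null;
}

// Listen success decorator: perform an operation only on a successful result of the tested condition.
function onListenSuccess(result: string | null, action: (ruleName: string) => void): void {
  if (result !== null) action(result);
}

// Example: onListenSuccess(listen(greetingRule, "Hi there, robot"), r => console.log(`matched rule: ${r}`));
```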
PCT/US2017/025137 2016-03-31 2017-03-30 Persistent companion device configuration and deployment platform WO2017173141A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA3019535A CA3019535A1 (en) 2016-03-31 2017-03-30 Persistent companion device configuration and deployment platform
JP2019502527A JP2019521449A (en) 2016-03-31 2017-03-30 Persistent Companion Device Configuration and Deployment Platform
KR1020187031496A KR102306624B1 (en) 2016-03-31 2017-03-30 Persistent companion device configuration and deployment platform

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662316247P 2016-03-31 2016-03-31
US62/316,247 2016-03-31

Publications (1)

Publication Number Publication Date
WO2017173141A1 true WO2017173141A1 (en) 2017-10-05

Family

ID=59966475

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/025137 WO2017173141A1 (en) 2016-03-31 2017-03-30 Persistent companion device configuration and deployment platform

Country Status (4)

Country Link
JP (1) JP2019521449A (en)
KR (1) KR102306624B1 (en)
CA (1) CA3019535A1 (en)
WO (1) WO2017173141A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10467062B1 (en) * 2019-03-11 2019-11-05 Coupang, Corp. Systems and methods for managing application programming interface information
KR102271361B1 (en) 2019-11-08 2021-06-30 고려대학교 산학협력단 Device for automatic question answering
JP7446178B2 (en) * 2020-08-05 2024-03-08 本田技研工業株式会社 Behavior control device, behavior control method, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6606479B2 (en) * 1996-05-22 2003-08-12 Finali Corporation Agent based instruction system and method
US20090053681A1 (en) * 2007-08-07 2009-02-26 Triforce, Co., Ltd. Interactive learning methods and systems thereof
US20150314454A1 (en) * 2013-03-15 2015-11-05 JIBO, Inc. Apparatus and methods for providing a persistent companion device
US9286711B2 (en) * 2011-09-30 2016-03-15 Microsoft Technology Licensing, Llc Representing a location at a previous time period using an augmented reality display

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4934904B2 (en) * 2000-05-12 2012-05-23 富士通株式会社 Robot cooperation device, robot cooperation program storage medium, and robot cooperation program
JP4670136B2 (en) * 2000-10-11 2011-04-13 ソニー株式会社 Authoring system, authoring method, and storage medium
EP1727605B1 (en) * 2004-03-12 2007-09-26 Koninklijke Philips Electronics N.V. Electronic device and method of enabling to animate an object
JP2007069302A (en) * 2005-09-07 2007-03-22 Hitachi Ltd Action expressing device
FR2946160B1 (en) * 2009-05-26 2014-05-09 Aldebaran Robotics SYSTEM AND METHOD FOR EDIT AND ORDER BEHAVIOR OF MOBILE ROBOT.
EP2974273A4 (en) * 2013-03-15 2018-01-10 Jibo, Inc. Apparatus and methods for providing a persistent companion device

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11222632B2 (en) 2017-12-29 2022-01-11 DMAI, Inc. System and method for intelligent initiation of a man-machine dialogue based on multi-modal sensory inputs
US20190206393A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for dialogue management
US11024294B2 (en) 2017-12-29 2021-06-01 DMAI, Inc. System and method for dialogue management
WO2019133684A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for personalized and adaptive application management
WO2019133694A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for intelligent initiation of a man-machine dialogue based on multi-modal sensory inputs
WO2019133698A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for personalizing dialogue based on user's appearances
WO2019133710A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for dialogue management
WO2019133715A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for artificial intelligence driven automated companion
US10567570B2 (en) 2017-12-29 2020-02-18 DMAI, Inc. System and method for personalized and adaptive application management
US11504856B2 (en) 2017-12-29 2022-11-22 DMAI, Inc. System and method for selective animatronic peripheral response for human machine dialogue
US11468894B2 (en) 2017-12-29 2022-10-11 DMAI, Inc. System and method for personalizing dialogue based on user's appearances
US11190635B2 (en) 2017-12-29 2021-11-30 DMAI, Inc. System and method for personalized and adaptive application management
CN112074899A (en) * 2017-12-29 2020-12-11 得麦股份有限公司 System and method for intelligent initiation of human-computer dialog based on multimodal sensory input
WO2019133689A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for selective animatronic peripheral response for human machine dialogue
US11150869B2 (en) 2018-02-14 2021-10-19 International Business Machines Corporation Voice command filtering
US11331807B2 (en) 2018-02-15 2022-05-17 DMAI, Inc. System and method for dynamic program configuration
US11238856B2 (en) 2018-05-01 2022-02-01 International Business Machines Corporation Ignoring trigger words in streamed media content
US11200890B2 (en) 2018-05-01 2021-12-14 International Business Machines Corporation Distinguishing voice commands
CN108928631B (en) * 2018-05-25 2024-01-26 上海优异达机电有限公司 Automatic distributing device for FPC jointed boards
CN108928631A (en) * 2018-05-25 2018-12-04 上海优异达机电有限公司 The automatic material distributing device of FPC jigsaw
USD934323S1 (en) 2018-11-05 2021-10-26 DMAI, Inc. Robot
USD916161S1 (en) 2018-11-05 2021-04-13 DMAI, Inc. Robot
USD888165S1 (en) 2018-11-05 2020-06-23 DMAI, Inc. Robot
WO2020163171A1 (en) * 2019-02-07 2020-08-13 quadric.io, Inc. Systems and methods for implementing a random access augmented machine perception and dense algorithm integrated circuit
KR102361038B1 (en) 2019-06-29 2022-02-09 주식회사 큐버 smart mirror chatbot system of high-level context awareness by use of adaptive multiple biometrics
KR20210001798A (en) * 2019-06-29 2021-01-06 주식회사 큐버 smart mirror chatbot system of high-level context awareness by use of adaptive multiple biometrics
US11355108B2 (en) 2019-08-20 2022-06-07 International Business Machines Corporation Distinguishing voice commands
EP4030422A4 (en) * 2019-09-30 2023-05-31 Huawei Technologies Co., Ltd. Voice interaction method and device
EP4144425A4 (en) * 2020-06-24 2024-03-06 Honda Motor Co., Ltd. Behavior control device, behavior control method, and program
CN111787169A (en) * 2020-07-13 2020-10-16 南京硅基智能科技有限公司 Three-party call terminal for mobile man-machine cooperation calling robot
CN113744414A (en) * 2021-09-06 2021-12-03 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
KR20180129886A (en) 2018-12-05
JP2019521449A (en) 2019-07-25
KR102306624B1 (en) 2021-09-28
CA3019535A1 (en) 2017-10-05

Similar Documents

Publication Publication Date Title
US11148296B2 (en) Engaging in human-based social interaction for performing tasks using a persistent companion device
KR102306624B1 (en) Persistent companion device configuration and deployment platform
US20170206064A1 (en) Persistent companion device configuration and deployment platform
US10391636B2 (en) Apparatus and methods for providing a persistent companion device
WO2016011159A9 (en) Apparatus and methods for providing a persistent companion device
Park et al. A metaverse: Taxonomy, components, applications, and open challenges
US20220284896A1 (en) Electronic personal interactive device
US20230092103A1 (en) Content linking for artificial reality environments
US20180229372A1 (en) Maintaining attention and conveying believability via expression and goal-directed behavior with a social robot
Hoffman et al. Design and evaluation of a peripheral robotic conversation companion
RU2690071C2 (en) Methods and systems for managing robot dialogs
Dasgupta et al. Voice user interface design
US20230086248A1 (en) Visual navigation elements for artificial reality environments
WO2016206645A1 (en) Method and apparatus for loading control data into machine device
WO2021007546A1 (en) Computing devices and systems for sending and receiving voice interactive digital gifts
WO2018183812A1 (en) Persistent companion device configuration and deployment platform
Smith Ok, google: designing information architecture for smart speakers
WO2024219336A1 (en) Action control system and robot
WO2024214708A1 (en) Action control system
US20240354641A1 (en) Recommending content using multimodal memory embeddings
Yim Robotic User Interface for Telecommunication
WO2024220287A1 (en) Dynamic model adaptation customized for individual users

Legal Events

Date Code Title Description
ENP Entry into the national phase (Ref document number: 3019535; Country of ref document: CA)
ENP Entry into the national phase (Ref document number: 2019502527; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20187031496; Country of ref document: KR; Kind code of ref document: A)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17776689; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: pct application non-entry in european phase (Ref document number: 17776689; Country of ref document: EP; Kind code of ref document: A1)