US20050015744A1 - Method for categorizing, describing and modeling types of system users - Google Patents
- Publication number
- US20050015744A1
- Authority
- US
- United States
- Prior art keywords
- user
- users
- interface
- task
- qualities
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/38—Creation or generation of source code for implementing user interfaces
Definitions
- This invention relates to modeling system users, and more specifically to modeling system users to aid in the design of user interfaces.
- a user model is a representation of the set of behaviors that a user actually exhibits while performing a set of tasks.
- the purpose of user modeling is to build a model of the behaviors used when a user interacts with a system. For example, if the system the user interacts with is a computer system, then the interaction occurs primarily with the computer interface (i.e. keyboard, monitor, mouse, and sound).
- An interface design team may be assembled to gather information on users in order to design a user interface. If the interface design team is emphasizing performance, the behaviors and characteristics that emerge are items related to the expert user. Expert users usually can effectively articulate their suggestions and are normally interested in achieving performance. Therefore, interviewers from the interface design team pay close attention to the comments and suggestions of these expert users. Another reason for giving credence to the expert user is that experts are usually the people who get promoted and are likely to be chosen as members of the design team. The problem is, of course, that other types of users do not have the same behaviors and capabilities as these experts and, thus, their needs are not represented in the requirements gathering phase of the interface design. Expert users are typically a small percentage of the user population. If the interface is designed for the expert user, this leaves a high percentage of users for whom the interface is unsuitable or less than optimal.
- the user modeling goal should thus characterize the users in such a way that the designers can incorporate the users' behaviors into the interface design so that performance is maximized (while acknowledging and compensating for the human element).
- the expectation is that the user models would also allow for the prediction of performance after the newly designed interface is operational.
- the style and type of user interface can significantly impact the resulting performance.
- a method is needed to model system users that produces information that can be used in the design of an interface that maximizes the performance of the users, and also allows for the prediction of performance after the newly designed interface is operational.
- the present invention is directed to a method for categorizing, describing, and modeling system users that substantially obviates one or more of the problems arising from the limitations and disadvantages of the related art.
- Another object of the present invention is to provide a method for modeling system users that aids in designing an interface more familiar and comfortable to users because particular components of the interface will be better suited for their particular style.
- the present invention preferably comprises a method for modeling types of system users. Behaviors of a variety of types of users are categorized into two or more groups. Descriptions of the behaviors of each user group are created based on behaviors of selected users from each user group. Models are generated for the described behaviors of each user group. A user interface can then be designed using information from these models. The performance of the variety of types of users is improved when the interface is used by these users.
- the behaviors may include navigation behaviors, parallel processing behaviors, and customer sales behaviors.
- Categorizing may comprise charting the behaviors on a chart having two, three, four, or more dimensions.
- the dimensions may include performance measures, cognitive workload measures, behavioral measures, or user characteristic measures.
- the descriptions of the behaviors of each user group may be related to the similarities within each group or the differences between each group.
- the descriptions of the behaviors of each user group may comprise listing the tasks by frequency and importance and selecting from the most important tasks for detailed task analysis.
- the detailed task analysis may comprise capturing the perceptual, cognitive, and motor stages of human behavior, and quantifying each stage as to processing speed and cognitive load.
- the detailed task analysis may be accomplished by using a modified GOMS methodology.
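The stage-level bookkeeping described above can be illustrated with a toy record structure. This is a sketch only, not the patent's actual method; the type and field names are hypothetical, and a full CPM-GOMS analysis would schedule stages on a PERT chart rather than simply summing them.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    kind: str            # "perceptual", "cognitive", or "motor"
    duration_ms: float   # quantified processing speed for this stage
    load: float          # quantified cognitive load for this stage

def serial_task_time(stages):
    """Total task time if the stages execute strictly one after another,
    as in a simple (non-CPM) GOMS estimate."""
    return sum(s.duration_ms for s in stages)

def mean_load(stages):
    """Average cognitive load across the recorded stages."""
    return sum(s.load for s in stages) / len(stages)
```

For a CPM-GOMS model, stages that can overlap would instead be laid out as a critical-path network, and the predicted task time would be the length of the longest path.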
- the models may include qualitative models which may include how the users within a specific group behave in certain situations, or how the users within a specific group perform certain functions.
- the models may include quantitative models which may incorporate the capability to make numerical performance predictions.
- the models of the behaviors may be constructed in an interactive process that results in the models representing the strategies and activities for each user group.
- the models of the behaviors may be validated, and the validating of the models may use actual data.
- the present invention may also preferably comprise a method for modeling behaviors of interface users where the models are used to provide data for designing a system user interface.
- a list of user behaviors is created. Important behaviors based on the desired goals for the system user interface are identified. Data related to the important behaviors are obtained from a plurality of users. The data is graphed, where the axes of the graph may be related to two or more important behaviors of the plurality of users. Clusters in the graphed data are identified, where the clusters represent groups of users with similar important behaviors. At least one user is selected from each user group. Additional data from the selected users is obtained, the additional data related to the selected users' behaviors. The selected users' behaviors are described based on analyzing the additional data. Models of said selected users' behaviors are created based on the descriptions of the selected users' behaviors. A user interface may be created using information from the models. The plurality of users' performance may be improved when using the user interface.
- FIG. 1 is a block diagram of the present invention.
- FIG. 2 is a flowchart of the categorization methods according to the present invention.
- FIG. 3 is an exemplary user survey form.
- FIG. 4 is a flowchart of the selection of cognitive workload techniques according to the present invention.
- FIG. 5 is a flowchart of the selection of types of subjective workload measures according to the present invention.
- FIG. 6 is a table of NASA-TLX rating scale definitions.
- FIG. 7 is an exemplary graph of performance and cognitive workload according to the present invention.
- FIG. 8 shows exemplary instructions and definitions for a cognitive workload modified NASA-TLX survey.
- FIG. 9 is an exemplary cognitive workload modified NASA-TLX survey.
- FIG. 10 shows an exemplary procedure for administering a NASA-TLX survey.
- FIG. 11 shows an exemplary procedure for combining and analyzing survey data.
- FIG. 12 is a flowchart of GOMS modeling techniques.
- FIG. 13 is a table of task types and design information used to decide on a GOMS technique.
- FIG. 14 is a table of exemplary steps for user observation task analysis
- FIG. 15 is a table of exemplary steps for user video task analysis
- FIG. 16 is a table of exemplary steps for user eye tracking task analysis
- FIG. 17A shows an exemplary data file format for key press.
- FIG. 17B shows an exemplary data file format for eye movement.
- FIG. 17C shows an exemplary data file format for screen display.
- FIG. 17D shows an exemplary data file format for screen display objects.
- FIG. 18 is an exemplary CPM-GOMS task analysis PERT chart according to the present invention.
- FIG. 19 is an exemplary block diagram from a PERT chart according to the present invention.
- FIG. 20 is an exemplary user qualitative model according to the present invention.
- FIG. 21 shows an exemplary primary simulation outcome.
- FIG. 22 is a table of exemplary steps to construct a user model.
- FIG. 23 shows an exemplary procedure to refine a user model.
- FIG. 1 shows a block diagram of the present invention.
- the first activity performed is to create a tentative list of characteristics and behaviors of the users 2 .
- This tentative list is created by identifying the goals desired for the user interface or the user models, and listing expected and desired behaviors that are relevant to these goals.
- the list is then revised to include only those characteristics and behaviors that are important based on the goals.
- the activity of categorizing 4 begins.
- Information is obtained from users regarding their characteristics and behaviors. This information may be obtained from a survey completed by the users, or from some other means.
- Each user's characteristics and behavioral information is then converted to a score or value.
- the users are then mapped or charted based on which behaviors they exhibit.
- the mapping or charting is analyzed to identify clusters of users. These clusters define groups of users that have similar behaviors.
- the user population may be charted on a multidimensional chart and the groupings or clusters emerge from analysis of the chart data.
- the dimensions of the chart are the important behaviors and may include performance measures, cognitive workload measures, behavioral measures, or user characteristic measures.
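As a rough sketch of how such multidimensional charting and cluster identification might be automated (illustrative only, not part of the patent's disclosure; the data and function names are hypothetical), each user's scores on the chosen dimensions can be treated as a point and grouped with a simple k-means pass:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Group users' behavior-score points into k clusters.

    points -- list of equal-length tuples, one per user; each coordinate is
              that user's score on one chart dimension (e.g., a performance
              measure or a cognitive workload measure).
    Returns (labels, centers): a cluster index per user and the final centers.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each user to the nearest cluster center
        for j, p in enumerate(points):
            labels[j] = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])),
            )
        # move each center to the mean of its assigned users
        for i in range(k):
            members = [p for p, l in zip(points, labels) if l == i]
            if members:
                centers[i] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels, centers
```

In practice an analyst may simply eyeball the chart, as the patent describes; the point here is only that "clusters of users with similar behaviors" is a well-defined computation.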
- the groups are then analyzed to produce descriptions of each group as shown in activity block 6 . This consists of selecting one or more users from each group and obtaining additional behavioral information. This additional behavioral information is analyzed to produce descriptions for each group. These descriptions are then used to formulate models of behaviors 8 for each group. Information from these models can be used to design and create a user interface 10 . The methods and means to accomplish these activities will now be discussed in further detail.
- the present invention may be applied to various types of users of a variety of system interfaces.
- One embodiment will be utilized by being described in detail to illustrate the present invention.
- This embodiment uses the present invention to model system users such as service representatives that interface with customers and use a computer interface to help service their customers' needs.
- this computer interface may be used by service representatives for the purpose of negotiating new services with customers.
- the representative can accomplish the sales and setup of those requested services through the use of the computer interface.
- the next activity 4 is to categorize the behaviors into groups.
- the user population is categorized into several groups.
- the number of groups may range from 3 to 5 groups; however, using more than this number would still be within the spirit and scope of the present invention.
- This categorization effort is accomplished based upon similar behavioral characteristics between users that are important to system interface design and use. Just as having a single representation is an oversimplification of the user population, representing each and every user individually is not practical. There are hundreds to thousands of users for some major systems. Therefore, it is a reasonable compromise to group users into 3 to 5 groups and represent the needs of those groups as the user interface needs.
- categorization methods are: user characteristics method 12 , performance characteristics method 14 , behavioral characteristics method 16 , and cognitive workload method 18 .
- An appropriate method is selected based on the types of users, and the goals for the system interface desired.
- a combination of methods may also be used, and still be within the spirit and scope of the present invention, if this is desired based on the users and goals of the user interface.
- User characteristics refer to user qualities or traits that are measurable and differ between users.
- the specific user characteristics that facilitate the categorizing of users may be general or may be task/job dependent.
- the users self-rate their user characteristics.
- a general characteristic that may be used to group users could be their ability to recall information.
- Users who rate themselves as having difficulty recalling the various packages/services offered may benefit from a menu-based interface.
- Menu-based interfaces require less mental demand (memorizing) than other types of interfaces. Users who rate themselves as having no difficulty recalling the various packages/services offered may benefit from an interface where menus can be skipped and shortcut keys can be used.
- An exemplary survey that captures user characteristics that may facilitate the categorizing of users is shown in FIG. 3 . Questions 1, 2, 3, 6, and 7 are more general user characteristics, while questions 4 and 5 are more task/job dependent.
- Performance is also a method to facilitate the categorizing of users.
- four months of performance measures are acquired for these users. These performance measures include: gross dollar sales per month, net dollar sales per month, retention of sales, “cross or up” sales per month, number of orders per month, dollar sales per order, and number of incoming calls per month.
- the number of orders per month may separate the order takers from the rest of the users.
- the order takers are users who, as quickly as possible, set up the package or service the customer has requested. They do not cross-sell other packages or services to the customer; they quickly take an order, hang up, and quickly take an order again. Order takers are expected to have a larger number of orders per month as compared with the other users. However, they may have a lower average of dollar sales per order.
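The order-taker pattern described above, many orders combined with low dollar sales per order, can be expressed as a simple screen. The numeric cutoffs below are invented for illustration; the patent does not specify thresholds.

```python
def dollars_per_order(gross_dollar_sales, orders):
    """Average dollar sales per order for one representative in one month."""
    return gross_dollar_sales / orders

def looks_like_order_taker(orders, gross_dollar_sales,
                           order_cut=200, per_order_cut=40.0):
    """Heuristic flag: a high monthly order count combined with a low
    average sale per order suggests an order taker (cutoffs are illustrative)."""
    return (orders > order_cut
            and dollars_per_order(gross_dollar_sales, orders) < per_order_cut)
```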
- effort expenditure refers to an operator's ability to manage the system demands. This ability may be continually changing (e.g., physiological readiness, experience and motivation) or may stay relatively constant (e.g., general background, attitude, personality, psychophysical factors).
- the basic properties that any measurement should have are the properties of validity, and reliability. However, because cognitive workload is multidimensional, many other properties are also helpful in determining which measurement to choose. These properties include sensitivity, diagnosticity, global sensitivity, intrusiveness, implementation requirements, operator acceptance, and transferability.
- Sensitivity is a primary property of cognitive workload. It refers to a measurement's ability to detect different degrees of workload imposed by the task or system. The degree of sensitivity required is directly associated with the question to be answered by the workload technique. Two basic questions asked with regards to workload are: (1) is an overload occurring which demonstrates a degradation in operator performance, and (2) is there a potential for such an overload to exist.
- Diagnosticity refers to the ability to discriminate differences in specific resource expenditures, as related to the multiple resources model. For example, a secondary tracking task may demonstrate there is an overload for motor output when an operator is performing a typing task.
- Operator acceptance is important to ensure that a measurement will reflect accurate data. If an operator does not accept the measurement, the measurement could be ignored (e.g., the operator ignores the secondary task or randomly rates the task with a subjective measurement), the operator could perform at a substandard level, or operator workload could increase due to the measurement not due to the task.
- the cognitive workload (mental workload) method 18 is chosen because of its ability to obtain a variety of information.
- any categorization method used would still be within the spirit and scope of the present invention.
- Cognitive workload has been described in several publications. One example is O'Donnell, R. D., and Eggemeier, F. T. (1986). Workload assessment methodology. In K. R. Boff, L. Kaufman, and J. Thomas (Eds.), Handbook of perception and human performance: Volume II. Cognitive processes and performance (pp. 42/1-42/49). New York: Wiley.
- Different types of cognitive workload techniques are shown in FIG. 4 . They are subjective measures techniques 20 , performance-based measures techniques 22 , and physiological measures techniques 24 . One or more of these methods is selected, again based on the types of users and the goals for the system interface desired. For the service representative embodiment, both the subjective measures technique and performance-based measures are used. However, any cognitive workload technique, or combination of techniques, can be used, and such use would still be within the spirit and scope of the present invention.
- SWAT Subjective Workload Assessment Technique
- NASA-TLX NASA-Task Load Index
- SWORD Subjective Workload Dominance
- a reason for using subjective measures is that they typically are highly sensitive to detecting overloads. They tend to be globally sensitive and are not intrusive since they are performed after the task is completed. In addition, the implementation requirements are low (e.g., a pencil and paper, possibly some training on the measurement) and operator acceptance is usually high.
- subjective measures are not always diagnostic, especially in facilitating the redesign of a task or system. The few subjective techniques that have some diagnostic abilities are very generalized. Some subjective techniques also have problems with operator acceptance.
- Physiological measures tend to be extremely sensitive, some are highly diagnostic, while others are globally sensitive. However, physiological measures are intrusive, have a high degree of implementation requirements (e.g., for an EEG, an EEG machine, an oscilloscope, and electrodes are needed), and are expected to have low operator acceptance in operational environments.
- Secondary task measures are categorized as either a subsidiary task paradigm or a loading task paradigm.
- In the subsidiary task paradigm, secondary task measurements evaluate how much of one or more resources are being consumed by the primary task. Users put emphasis on primary task performance. Secondary tasks are added to the primary task to impose an additional load on the operator. Analyzing performance decrements on the secondary tasks determines how much of the resources are consumed. Properly choosing secondary tasks determines which resources are consumed.
- In the loading task paradigm, secondary task measurements determine when and how much the primary task deteriorates. Users put emphasis on secondary task performance while the degree of difficulty of the primary task is manipulated. Two or more primary tasks may also be compared for task deterioration with this paradigm.
- a subjective measure is chosen. As shown in FIG. 5 , a decision must be made to select between various subjective assessment techniques such as the Modified Cooper-Harper 26 , Subjective Workload Assessment Technique (SWAT) 28 , NASA-Task Load Index (NASA-TLX) 30 , or Subjective Workload Dominance (SWORD) 32 . For the service representative embodiment, a NASA-Task Load Index (NASA-TLX) technique is chosen. However, any subjective workload measure chosen would still be within the spirit and scope of the present invention.
- the NASA-TLX evolved from the NASA Bipolar scale. Similar to SWAT, the Bipolar scale was developed with the consideration that workload is multidimensional, thus, a measurement of workload should also be multidimensional. Developed after SWAT, the Bipolar was designed with nine scales because the Bipolar authors did not believe the scales in SWAT were sufficient. The Bipolar also recognizes that from task to task, the scales may vary in importance, and allows users to acknowledge these differences. In addition, this technique was developed to contain diagnostic scales, which could be rated based on subjective importance.
- the NASA-TLX inherited properties from the Bipolar scale with the exception that the NASA-TLX has six scales to allow for an easier implementation.
- the scales represent task characteristics (mental demand, physical demand, and temporal demand), behavioral characteristics (performance and effort), and operator's individual characteristics (frustration). These scales and their corresponding definitions are shown in FIG. 6 .
- TLX also added the ability to consider individual differences through the weighting of the workload scales.
- TLX involves a two-part procedure consisting of both ratings and weightings. After the operator completes the task, numerical ratings are obtained for each of the six scales. The operator is given both the rating scale definition sheet and a rating sheet. On the rating sheet, there are twenty intervals with endpoint anchors for each of the six scales. Users mark the desired location for each scale. A score from 5 to 100 is obtained on each scale by multiplying the rated value by five. Depending on the situation, rating sheets, verbal responses, or a computerized version are considered practical.
- the ratings of each scale are arranged in a raw ratings column. Adjusted ratings are calculated by multiplying the raw ratings by the corresponding tallied scale scores. The adjusted ratings for all six different scales are then summed. The total sum is divided by 15 (for the number of paired comparisons) to obtain the weighted workload score (ranging from 0 to 100) for the operator in that task condition. Analysis of the data can then be performed.
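The rating-and-weighting arithmetic just described reduces to a few lines. This is a sketch of the NASA-TLX computation as summarized above; the variable names are my own.

```python
def weighted_workload(marks, tallies):
    """NASA-TLX weighted workload score for one operator and task condition.

    marks   -- dict: scale name -> interval marked on the rating sheet (1-20)
    tallies -- dict: scale name -> number of times the scale was chosen in
               the 15 paired comparisons (tallies sum to 15)
    """
    raw = {s: m * 5 for s, m in marks.items()}        # each rating scored 5-100
    adjusted = {s: raw[s] * tallies[s] for s in raw}  # raw rating x scale weight
    return sum(adjusted.values()) / 15.0              # weighted score, 0-100
```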
- Due to the multidimensional properties of workload, some level of diagnosticity may be distinguished by using TLX.
- Generalized conclusions may be made based on operator strategies and on weightings and judgments of the six dimensions of mental demand, physical demand, temporal demand, performance, effort, and frustration.
- TLX is not considered intrusive because it is performed after the task is completed. Implementation requirements are typically low; the definition sheet, rating sheet, and paired comparisons are needed for every operator and task. Some time may be required for users to practice with and familiarize themselves with the scales. Operator acceptance is typically high and TLX is usually transferable.
- TLX was robust against slight (e.g. 15 minute) delays in operator ratings and in non-controlled order effects. TLX is also considered potentially more sensitive at low levels of workload compared to SWAT, and TLX's paired comparison procedure may be omitted without compromising the measure.
- a goal is to determine how a new system interface should be designed to increase the performance of service representatives. Since a current system interface exists, the question must then be asked, why use a cognitive workload assessment technique to determine how the system interface should be re-designed? Before this question can be answered, the current system interface and how it is used by the service representatives to sell products, services, and packages must be examined.
- a cognitive workload assessment technique will result in one of three possible outcomes: all service representatives are overloaded, none of the service representatives are overloaded, or some of the service representatives are overloaded. If the results are that some or all of the service representatives are overloaded, the system should typically be redesigned to lower the load. If none of the service representatives are overloaded, the system may not necessarily have to be redesigned. It may be cost justified to train different strategies to the service representatives who are performing at a lower level. Thus, a cognitive workload assessment technique will help to determine if the system should be redesigned.
- measuring cognitive workload is an indirect way to measure types of strategies. Furthermore, measuring cognitive workload is quick, easy to perform, and inexpensive. Conversely, determining each service representative's strategy for each task would take a considerable amount of time, would require a lot of effort, and would be very expensive.
- groups of service representatives may be parsed out and a small number of service representatives in each group may be examined for their strategies.
- a graph of a cognitive metric versus a performance metric is expected to help parse out the groups, given the assumption that the degree of cognitive workload and user characteristics are highly correlated with strategy. An example of this graph is shown in FIG. 7 .
- the Blue Group has higher performance and lower cognitive workload
- the Green Group has medium performance and higher cognitive workload
- the Yellow Group has lower performance and lower cognitive workload. From this data, individuals in each group could be examined for strategies used during their tasks. It is expected that strategies within a group would be similar, but between groups would be different.
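The grouping read off FIG. 7 can be mimicked with simple cutoffs. The thresholds below are illustrative placeholders, not values taken from the patent.

```python
def assign_group(performance, workload,
                 perf_hi=70.0, perf_lo=40.0, load_hi=50.0):
    """Label a representative by the FIG. 7 pattern: Blue = higher performance
    with lower workload, Green = medium performance with higher workload,
    Yellow = lower performance with lower workload. Cutoffs are illustrative."""
    if performance >= perf_hi and workload < load_hi:
        return "Blue"
    if perf_lo <= performance < perf_hi and workload >= load_hi:
        return "Green"
    if performance < perf_lo and workload < load_hi:
        return "Yellow"
    return "Unclassified"  # points outside the three observed clusters
```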
- a goal in redesigning the system is to improve performance in as many of the groups as possible.
- the redesign should not trade off one group's improved performance for another group's impaired performance.
- the redesign should also result in an overall improvement in performance compared to the old level of performance. A lack of improved performance would suggest that no group improved because the redesign introduced both improvements and detriments into their tasks.
- a cognitive workload assessment technique may provide clues to improve the system.
- the diagnosticity of the technique may give some insight as to where the problems which result in lower levels of performance occur and how to design the system to eliminate such problems. For example, using a multidimensional subjective technique, it may be found that one group feels overloaded on the dimension of mental effort. It may be they feel there is too much to remember, and if the system was redesigned as menu-based to lower the use of memory, the mental effort load would be decreased to a more satisfactory level.
- Some of these techniques may also be used to evaluate the newly designed system. Some of the cognitive workload assessment techniques may be used to determine if the new system decreases the service representatives' loads. Several cognitive workload assessment techniques can be used at any stage of development. A properly chosen technique can signal design problems early in the development of a new system.
- any system interface design effort would benefit from measuring the cognitive workload of the current tasks. As mentioned previously, this information may help determine if the system needs to be redesigned. In addition, the cognitive workload of a task should not be measured relative to another task; rather, it should be an absolute measurement for the system interface design team. This restricts SWORD from being recommended. The measurement should give diagnostic information so that, if the system needs to be redesigned, information from the cognitive workload measure will help indicate what should be redesigned and how it should be redesigned. This restricts MCH from being recommended. It is unclear whether the measure needs to be sensitive to low levels of workload. Therefore, SWAT or NASA-TLX could be recommended as the subjective measure for the service representative embodiment.
- NASA-TLX was chosen over SWAT as the recommended technique because sensitivity to low levels of workload may be required. The recommendation is also based on some of NASA-TLX's properties. As compared to SWAT, NASA-TLX is fast and easy to perform. Service representatives will probably have a higher acceptance of it than of SWAT.
- Although the NASA-TLX is thus seen to be best for the service representative embodiment, it was modified.
- the NASA-TLX currently contains six scales, namely, mental demand, physical demand, temporal demand, performance, effort, and frustration. Since service representatives are not affected by physical demands, this scale was removed from the TLX. In addition, effort is a difficult scale to define. In pre-study testing, effort was confused with mental demand. Therefore, effort was also removed from the TLX. Furthermore, the performance scale was also removed from the TLX since users may view the performance scale as a scale related to their performance reviews.
- the modified TLX for use in the service representative embodiment thus contains the three scales of mental demand, temporal demand, and frustration. Similar to the original NASA-TLX technique, these three scales will be rated and compared. However, service representatives will only perform one rating based on their tasks that day; each task will not be individually rated.
- An exemplary cognitive workload TLX survey is shown in FIGS. 8 and 9 . Instructions and definitions for the survey are shown in FIG. 8 , while the survey itself is shown in FIG. 9 . Exemplary steps outlining procedures for administering the survey and modified NASA-TLX instrument, using the service representative embodiment, are shown in FIG. 10 .
- Once the survey data is received from the users, it is combined and analyzed. This includes graphing or charting the data and identifying groupings or clusters on the graph or chart. These groups suggest users with similar behaviors. One or more users from each group is selected. These selected users will undergo a more detailed analysis.
- An exemplary outline of steps and procedures for combining and analyzing the survey data is shown in FIG. 11 .
- a task analysis is a method employed to enable one to understand how a user performs a specific task and to describe this behavior.
- Task analyses allow interface designers an understanding of what must be done to accomplish a task. They may also obtain insight into how a task can be better accomplished and what is needed to better accomplish the task. All of this information facilitates the development of a new system interface.
- task analyses may help system interface requirements development by determining what functionality is necessary or desired in a system interface.
- Functionality refers to those functions in a system that users find useful in accomplishing their tasks.
- functionality together with a well-designed interface should result in a system that is easy to learn and use.
- Behavioral characteristics are user characteristics that are not self-rated and are usually determined through a task analysis. Previously, it was noted that the user characteristics are measured through a survey which is self-rated by the user. Behavioral characteristics are not necessarily known to the user or may not be well communicated. Examples of behavioral characteristics are the user's actual method of navigation and use of serial processing or parallel processing.
- a task analysis, in which the user is being monitored, may provide more insight into behavioral characteristics since the user is actually performing the task.
- Each action the user performs to accomplish the task is recorded in a task analysis. The record can show when menus are used versus shortcut keys versus other navigational procedures for a user. Different groups of users are expected to use different navigational techniques.
- Serial processing is the ability to perform one action (mental or motor) at a time, while parallel processing is the ability to perform more than one action at a time. Whether a user tends towards serial processing or parallel processing may be best determined in a task analysis. As previously noted, each action that the user performs to accomplish the task is recorded in a task analysis. The analysis record can show when the user is performing one action at a time versus performing a variety of actions at the same time. Different groups of users are expected to use different processing techniques.
- a subset of individuals in each categorized group are observed for behaviors used to perform their tasks.
- Two behaviors employed by the service representatives that made a predominant impact on their job performance were the number of cross-selling attempts made to the customer and the length of the call. For example, representatives who did not make any cross-selling attempts and quickly performed the service requested by the customer completed a large number of customer calls; this behavior typically resulted in a large number of low revenue calls. Other service representatives talked longer to the customer to determine the most likely types of products or services that they could successfully cross-sell; this behavior typically resulted in a smaller number of higher revenue calls. Based on the normalized number of cross-selling attempts per call and the normalized average length of call, the representatives were grouped by similar behaviors.
- Each of the service representatives observed were charted in a graph where the normalized average number of cross-selling attempts was graphed on the x-axis and the normalized average length of call was graphed on the y-axis. This was done for each observed service representative for a variety of call types (task types). Groups of representatives who were observed to have similar behaviors over different call types were grouped together. These behavior-based groups were then used to validate the cognitive and performance categorized groups.
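The normalization and charting described above can be illustrated with a short sketch. The representative names and raw figures are hypothetical, and splitting the normalized chart into quadrants is just one simple way to suggest behavior-based groups.

```python
# Hypothetical observations: representative -> (average cross-selling
# attempts per call, average call length in seconds).
observed = {
    "rep_a": (0.2, 180), "rep_b": (0.3, 200),
    "rep_c": (2.5, 420), "rep_d": (2.8, 460),
}

def min_max_normalize(values):
    """Rescale values to the 0-1 range for charting on a common axis."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

reps = list(observed)
xs = min_max_normalize([observed[r][0] for r in reps])  # x: cross-sell attempts
ys = min_max_normalize([observed[r][1] for r in reps])  # y: call length

for r, x, y in zip(reps, xs, ys):
    # A simple quadrant split of the chart suggests the behavior groups.
    quadrant = ("few" if x < 0.5 else "many", "short" if y < 0.5 else "long")
    print(r, round(x, 2), round(y, 2), quadrant)
```

Representatives landing in the same quadrant over a variety of call types would be grouped together, and those groups used to validate the survey-based categories.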
- the behavior-based groups were also used in the GOMS analyses.
- a GOMS analysis is another type of task analysis that was used to facilitate the description of the different groups of users.
- a GOMS model is a task analysis method that indicates the steps a user must accomplish to complete a task in the form of a model.
- the model can be used to help choose the appropriate functionality for a system.
- the model can also calculate if the task is easy to learn and use.
- The relationship between human cognition and performance underlying GOMS has been empirically validated.
- GOMS is based on the Model Human Processor.
- the Model Human Processor is a model of a user interacting with a computer. It can be described by a set of memories and processors together with a set of principles.
- a cognitive processor then handles the information and commands the motor processor to perform physical actions.
- the principles guide how the processors function. This is a simplified model of a user interacting with a computer, but it does facilitate the understanding, predicting, and calculating of a user's performance relevant to human-computer interaction.
- GOMS is an acronym that stands for Goals, Operators, Methods, and Selection rules. GOMS uses these components to model a human's interactions with a computer. Goals refer to the user's goals. What does the user want to accomplish? Goals are typically broken down into subgoals. Operators are the actions the user performs with the computer interface to accomplish the Goals. Examples of Operators are keystrokes, mouse movements, menu selections, etc. Methods are the arrays of subgoals and Operators that perform a Goal. Since GOMS models are not based on novice performance, the Methods are routine. Selection Rules are personal rules users follow to determine which Method to use if more than one Method can accomplish the same Goal. The Goals, Operators, Methods, and Selection rules combine to model how a user performs a task.
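A minimal encoding of the four GOMS components might look like the following sketch. The Goal, Method, and Operator names and durations are invented for illustration; they only show how a Selection rule picks among Methods and how Operators compose a Method's execution time.

```python
# Primitive Operators with assumed durations in seconds (placeholder values).
operators = {
    "press-key": 0.28, "point-mouse": 1.10, "click-mouse": 0.20,
}

# A Method is a sequence of Operators that accomplishes a Goal.
methods = {
    "open-record-via-menu": ["point-mouse", "click-mouse",
                             "point-mouse", "click-mouse"],
    "open-record-via-hotkey": ["press-key", "press-key"],
}

def select_method(goal, candidates):
    """Selection rule for a hypothetical user who prefers hotkeys
    whenever a hotkey Method exists for the Goal."""
    hotkey = [m for m in candidates if "hotkey" in m]
    return hotkey[0] if hotkey else candidates[0]

goal = "open-record"
chosen = select_method(goal, ["open-record-via-menu", "open-record-via-hotkey"])
duration = sum(operators[op] for op in methods[chosen])
print(chosen, round(duration, 2))
```

Different user groups would be modeled by different Selection rules (and possibly different Methods), producing different predicted task times.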
- GOMS models cover three general issues. First, they cover lower-level perceptual-motor issues. For example, GOMS discerns the effects of interface arrangement on keystroking or mouse pointing. Second, GOMS models display the complexity and efficiency of the interface procedures. Eventually the user must determine and execute a procedure to perform useful work with the computer system. Third, GOMS models examine these components and how they interrelate in the design of the system.
- GOMS models are approximate and include only the level of detail necessary to analyze the design problem. It is not necessary to have all parts of an analysis examined at the same level. Some design situations may require some areas of analyses to be examined to the level of primitive Operators (Operators at the lowest level of analysis), while other areas may be analyzed with higher-level Operators. GOMS models allow selective analyses.
- a GOMS model will produce quantitative predictions of performance earlier in the development cycle than prototyping and user testing.
- this model will also predict execution time, learning time, errors, and will identify interface components that lead to these predictions. Changes to these interface components will produce quantitative changes to the predictions of performance.
- The term experienced user is meant to identify users whose performance and behavior have stabilized.
- the behavior and performance of an experienced user is considered to be stabilized to the point that the particular user accomplishes their tasks in the same manner and style for each task execution. This means that their task time would be somewhat consistent. This also means that their error rate would be minimal.
- Stabilized performance also means that the user is not still learning the system, but has established (and repeats) their interaction, behavior, and style. If the user is still learning the system, it would be expected that their task time would improve as they gain additional experience.
- the experienced user has established a defined strategic approach to task completion. That is to say, that the user has worked with the system long enough to adapt where it is appropriate, and selected an approach to task completion that best matches the user's capabilities and style.
- the expert user and the experienced (lower performance) user may differ in their established task strategies, resulting in different types of user models for the two types of users. These models could then be used to facilitate the design of a system for the different groups.
- the behaviors interacting with the system will be generally similar. Users from each group would be selected and their behavior observed and documented. A CPM-GOMS approach has been modified to describe and document these user behaviors.
- behaviors are examined for similarities within a group and differences between groups of users. For example, a group of expert users may, to different degrees, use parallel processing for their cognitive activities while novices may use serial processing. The emphasis of these descriptions focuses on behaviors that affect performance and are then incorporated into the user models. There are limits to the descriptions, both in time consumption and user knowledge. Only the functions that are frequent and important are formally described.
- GOMS has developed into a family of cognitive modeling techniques.
- the GOMS family contains four techniques, all based on the GOMS concept of Goals, Operators, Methods, and Selection rules. All of the techniques produce quantitative and qualitative predictions of user performance on a proposed system. A decision must be made between the GOMS modeling techniques.
- the techniques are shown in FIG. 12 and include CMN-GOMS 34, KLM 36, NGOMSL 38, and CPM-GOMS 40. Use of any of the above, or other, GOMS modeling or task analysis techniques would still be within the spirit and scope of the present invention.
- The types of tasks the users perform are divided into serial-based or parallel-based tasks.
- Serial Operators can approximate many tasks, such as text editing. If a task can be appropriately represented by serial Operators, a serial processing GOMS technique should be used (CMN-GOMS, KLM, or NGOMSL). However, not all tasks can be approximated by serial Operators. For example, a task in which a service representative is concurrently talking to the customer and typing information into the system is more appropriately represented as tasks occurring in parallel. For these tasks, CPM-GOMS, the parallel processing GOMS technique should be used.
- the types of design information the GOMS technique obtains are divided into functionality, including coverage and consistency, operator sequence, execution time, learning time, and error recovery support.
- Functional coverage refers to the system's ability to provide some reasonably simple and fast Method to accomplish every Goal. After the users' Goals are determined, typically all GOMS Methods can provide functional coverage.
- Functional consistency refers to the system's ability to provide similar Methods to perform similar Goals.
- NGOMSL is the most appropriate technique to use for functional consistency because it employs a consistency measure. This measure is based on learning time predictions in which a consistent interface uses the same Methods throughout for the same or similar Goals, resulting in fewer Methods to be learned.
- Operator sequence refers to whether a technique is capable of predicting the sequence of Operators a user must perform to accomplish a task.
- CMN-GOMS and NGOMSL can predict the sequence of motor Operators a user will execute while KLM and CPM-GOMS must be supplied with the Operators.
- the ability of CMN-GOMS and NGOMSL to predict the sequence of Operators is useful in deciding whether to incorporate a new Method into a system. It is also useful in determining how to optimally incorporate training in the use of the new Method.
- CPM-GOMS does not predict Operator sequence for parallel processes, it can be used to examine the effects of design modifications which may alter Operator sequence.
- Learning time predictions are provided only by NGOMSL. This technique measures the time to learn the Methods in the model and any information required by long-term memory to accomplish the Methods. Since absolute predictions of learning time involve many complexities, NGOMSL should be limited to learning time predictions for comparisons of alternative designs.
- Error recovery support refers to helping users recover from an error once it has occurred.
- GOMS typically recognizes whether the system offers a fast, simple and consistent Method for users to apply when recovering from errors. Any of the GOMS techniques can be used to measure the error recovery support of a system.
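The selection criteria discussed above (task type and the design information needed) can be condensed into a small decision sketch. The precedence of the checks is an assumption made for illustration, not a rule stated in the specification.

```python
def choose_goms(parallel_task, need_learning_time=False,
                need_operator_sequence=False):
    """Pick a GOMS technique from the two criteria in the text.
    The check order is an assumed precedence for this sketch."""
    if parallel_task:
        return "CPM-GOMS"   # the only technique modeling parallel activities
    if need_learning_time:
        return "NGOMSL"     # the only technique predicting learning time
    if need_operator_sequence:
        return "CMN-GOMS"   # predicts motor Operator sequence (NGOMSL also can)
    return "KLM"            # simplest serial technique otherwise

print(choose_goms(parallel_task=True))                            # CPM-GOMS
print(choose_goms(parallel_task=False, need_learning_time=True))  # NGOMSL
```

For the service representative embodiment, the concurrent talking and typing makes the first branch apply, which is consistent with the choice of CPM-GOMS described below.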
- CPM-GOMS stands for Cognitive, Perceptual, and Motor. It performs at a level of analysis in which the user's cognitive, perceptual, and motor activities are included as simple primitive Operators. This GOMS technique allows for parallel processing, unlike the other three GOMS models. Thus, cognitive, perceptual, and motor activities can be performed in parallel as the task demands.
- CPM-GOMS uses a schedule chart (PERT chart) to display the Operators. PERT charts clearly present the tasks which occur in parallel. CPM-GOMS also stands for Critical-Path Method because PERT charts calculate the critical path (the total time) to execute a task. The PERT charts serve as quantitative models as they tell how long certain activities take, and may assign numerical values to tasks. These charts are used by the system interface designers, along with the qualitative models, to design the user interface.
- CPM-GOMS is based on the Model Human Processor (MHP).
- the human is modeled by three processors, namely, a perceptual processor, a cognitive processor, and a motor processor. Although each processor runs serially, processors can run in parallel with each other.
- CPM-GOMS directly utilizes the MHP by recognizing each of the processors that perform the Operators. It also recognizes the sequential dependencies between the processors. Because CPM-GOMS assumes that the user can perform as fast as the MHP can process, the user must be thoroughly experienced in the task. CPM-GOMS does not model novice users.
- the CPM-GOMS technique will be chosen as the basis for the modeling technique to be used.
- the CPM-GOMS technique was chosen because it can most appropriately represent the parallel activities that service representatives engage in while performing their tasks for this embodiment.
- CPM-GOMS performs at a level of analysis in which the user's cognitive, perceptual, and motor activities are included as simple primitive Operators. Since GOMS, in general, includes only the level of detail necessary to analyze the design problem, simple primitive Operators will only be examined when necessary for this project. It is expected that discrete groups of users will show behavioral and performance differences in the accomplishment of a task at a level of analysis higher than simple primitive Operators.
- the CPM-GOMS technique will incorporate the perceptual and cognitive processes together as one general processor. Thus, any internal Operators will be labeled as general mental Operators.
- the combining of perceptual and cognitive Operators into one classification was developed to simplify the models and to maintain a level of analysis higher than primitive Operators.
- because CPM-GOMS uses a PERT chart to display the Operators, it can calculate the critical path of the task.
- the critical path is the sequence of Operators that produce the longest path through the chart. The sum of the sequence of these Operators equals the total time to execute the task.
- Empirical data from actual performance of observable motor Operators will be used in the PERT chart. Both empirical data and data from cognitive psychology will be used to determine execution times of the mental Operators.
- execution times are helpful to determine the overall time of execution per processor/category for each task.
- the service representative's execution time may be divided into 48% talking (motor operators: verbal responses), 39% listening (mental operators), 8% waiting for the system (system response), and 5% typing (motor operators: hand movements).
- the execution times are also instrumental when comparing different tasks, the same task in a different system, or the performance of different types of users.
- Another reason the CPM-GOMS technique was chosen relates to its ability to compare alternative designs that are not currently built or prototyped.
- the CPM-GOMS models of the existing system can determine the effects of changes to current tasks, which in turn can facilitate the development of the proposed system.
- the models can also be used as baseline models for the proposed (redesigned) system.
- the models can be compared with models of the proposed system to examine the efficiency and consistency of the proposed system and the ability for users to convert to the proposed system.
- the CPM-GOMS technique can also be used to help develop the documentation and training material.
- documentation should be task-oriented.
- GOMS provides a theory-based, empirically validated, and systematic approach to determine the necessary content of task-oriented documentation and training material.
- CPM-GOMS can predict execution time differences between different Methods, the most efficient Methods and Selection rules could be highlighted in the documentation and users could be educated and encouraged to adopt these in training sessions.
- the CPM-GOMS technique used will document four major categories, namely, system response, customer response, mental operators, and motor operators.
- Mental operators include perceptual and cognitive Operators. Motor operators are divided into manual movements, verbal responses, and eye movements.
- a cell is a categorized event at an appropriate level of detail.
- Customer response, motor operators, and system response will be recorded in the cells when the user is performing the task.
- Mental operators will be recorded in the cells after the completion of the task.
- Mental operators will be added based on empirically validated theories in cognitive psychology and input from the users.
- the cells are connected to each other based on their precursor needs. Next, execution times are included for each cell and then the CPM-GOMS model emerges.
- the level “A” task analysis will be when design team members are trained to capture observed behavior while sitting and recording the service representative's actions.
- the level “B” task analysis will be captured primarily through videotape and keystroke data.
- the level “C” task analysis will be by capturing key strokes, user attention through eye tracking, and through video tape.
- level “A” is focused on team members capturing behavior that can be observed while sitting next to the service representative and recording their actions manually. The steps of this level are shown in FIG. 14 .
- level “B” is focused on team members capturing behavior that can be observed and recorded on video while the service representative is in their normal environment or while in a laboratory environment.
- the recorded information will be video and audio.
- the steps of this level are shown in FIG. 15 .
- level “C” is focused on team members capturing behavior that can be observed and recorded on video and, most importantly where the service representative is looking or searching, while the service representative is in a laboratory environment. Although the recorded information will be video and audio, the focus for this level is on the eye tracking information and data. The steps of this level are shown in FIG. 16 .
- the information and data that is gathered during this level of task analysis can be very useful during the design phase of systems development.
- One particular eye tracking data set is the characteristic search patterns of the users. How the users search the screen provides some excellent insights into how the users process the information being presented on the screen.
- This information and data from the eye tracking analysis assists in describing how users (actually groups or categories of users, e.g., blue group) perform tasks and how the interface assists them in performing those tasks.
- the sequence and duration of eye movements provides valuable information into how the user is using the information on the screen to make decisions.
- the sequence or pattern of eye movements is an indication of the strategy that the user is employing.
- the duration of eye movements is an indication of the user showing attention to that particular item of detail. This knowledge of strategy and attention is important to constructing a model of user behavior, i.e., how the blue group accomplishes their tasks.
- This level of task analysis is at the lowest level of detail and thus generates a large amount of data.
- One of the potential problems with this large amount of time-oriented data is keeping the various data files properly registered. This time registration problem is compensated by, at the start and at the end, having the user perform a unique task that can be identified in all of the data files. An example of this unique task would be looking at the lower left part of the screen while pressing the “w” key for a period of five seconds.
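The time-registration compensation can be sketched by shifting every file's timestamps so that the unique marker task occurs at t = 0 in all of them. The event names and clock times below are invented for illustration.

```python
# Two hypothetical data streams with independent clocks. The unique
# marker task (holding "w" while fixating the lower left of the screen)
# appears in both, at different clock times.
key_press = [(2.00, "w-down"), (7.00, "w-up"), (9.40, "F3")]
eye_move = [(5.10, "lower-left"), (12.50, "menu-bar")]

def align(events, marker_time):
    """Shift timestamps so the marker occurs at t = 0 in this file."""
    return [(round(t - marker_time, 2), label) for t, label in events]

key_aligned = align(key_press, marker_time=2.00)  # marker: "w" pressed
eye_aligned = align(eye_move, marker_time=5.10)   # marker: lower-left fixation

print(key_aligned[2])
print(eye_aligned[1])
```

After alignment, the F3 key press and the menu-bar fixation carry the same timestamp, showing that simultaneous events in different files can now be matched.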
- the various data files to be discussed are key press, eye movement, video, audio, screen display, and screen display objects.
- the “key press” file needs to capture which keys were pressed, when they were pressed, and mouse movements. The mouse movements would only record location (the X and Y location) when the mouse is clicked or double clicked.
- the “eye movement” file needs to capture the location and duration of eye fixations. These eye fixations have a minimum duration threshold of 50 milliseconds so that simple eye movements from one location to another are not included in the file.
- the “video” file needs to capture the hand movements, screen display changes, notes taken by the user, and user facial expressions. This video would allow for 20 millisecond frame increments.
- the file format is the normal videotape format with time stamps.
- the “audio” file needs to capture the voice, speech, and sound from the user, customer, computer, and other devices.
- the file format is the normal audiotape format with time stamps.
- the “screen display” file needs to capture the image that was displayed on the screen for the user to observe.
- the screen image is only recorded when the image changes.
- the screen capture file will be in the normal format.
- the “screen display objects” file needs to capture the location of the various objects that comprise the screen display.
- Exemplary data file formats for key press, eye movement, screen display, and screen display objects are shown in FIGS. 17A, 17B, 17C, and 17D, respectively.
- the output from the behavior description activity is a PERT chart, as shown in FIG. 18.
- This PERT chart is then used to create the block flow diagram in FIG. 19 . Both of these are used in creating the user behavior models.
- a user model can be constructed. There are two types or levels of models: qualitative and quantitative. Both of these model types are useful to the system interface design team and accomplish different objectives.
- the qualitative models are statements of how users within a specific user group behave in certain situations or performing certain functions. For example, a qualitative statement for a user may be “This type of user has a great desire to navigate within the system quickly and is capable and willing to memorize short-cut keys in order to jump from screen to screen quickly.” These qualitative models make a contribution to the design team by allowing design team participants to specifically represent each of the various user groups in the design process. As the various design decisions are addressed, these insights are extremely valuable, so that the various user group needs are not lost in the development process.
- the quantitative models also represent the behavior of a specific user group, but in a manner that has greater detail and with greater precision.
- the quantitative models are more formal and incorporate the capability to make numerical performance predictions unlike the qualitative models. These models make use of programming languages, which are well suited for such representation. Items such as arrival patterns, resources allocations, task duration times are also included and combined with the flow of incoming work. These quantitative models are developed only to the degree of detail necessary to adequately represent the user groups for the design team during the system development process.
- having a large, inconsistent user group is the situation that is common today, where the user population is viewed from a single perspective.
- This large, inconsistent user group is not desirable; rather, a smaller, consistent set of behaviors within a group is desired.
- the model that is then generated can be relatively tight. However, if a given group or category is allowed to be loosely grouped or loosely defined by including a broad set of behavioral characteristics, then the set of behavioral descriptions will be correspondingly broad and the model that is generated will be relatively loose.
- model layout is similar to the description layout.
- the behavioral description has four categories of Service Representative Verbal, Service Representative Motor, Service Representative Cognitive, and Customer. These four categories will be mapped to the model construction also.
- the format of the model construction is shown in FIG. 20 .
- the flow of the user model in FIG. 20 begins at the “create” box in the upper left corner.
- This “create” box represents the arrival of a phone call to this service representative.
- Other parts of the model, not shown, describe and represent the behavior of the arrival of phone calls.
- Phone calls from customers have been shown to form a pattern of arrivals dependent upon time of day, day of week, day of month, and month of year.
- Each customer phone call has a set of attributes or characteristics.
- An example of an attribute is the type of call.
- An example of call type is a customer wanting to disconnect their service because they are moving to Florida.
- the processing of this call type begins by the service representative answering the phone call, “SR Talks”, with a greeting “Southwestern Bell Telephone. How may I help you?” (SR is the service representative).
- This action is represented with the action box in the lower left corner.
- the notation below this box (and below the other action boxes) is a calculation of the duration of this specific action.
- the action's duration is how much time it will take to accomplish this action.
- the “TRIA” function is a common distribution used in simulation that has a minimum, mode, and maximum time period. In this case, these are represented by variables of SRV1, SRV2, and SRV3, respectively.
- After the service representative asks what the customer wants, the customer replies, "Cust. Talks", that they wish to disconnect their service. Again, the TRIA function denotes the length of time that this action by the customer requires.
- Upon hearing the customer's request, the service representative makes a mental decision, "SR Decides", as to what operational function will accomplish the customer's request.
- the first item of information to begin the disconnect function is the customer's name (and phone number), so the service representative requests, “SR Talks”, that information.
- the service representative moves their mouse, “SR Mouses”, to the “disconnect” button on their screen and clicks the mouse button. The remaining tasks to complete the disconnect function are not illustrated.
- the user model is a series of action boxes, which, by their sequence and delay time, represent how a user will interact with a customer who wishes to disconnect their service. It is important to note that this flow illustrates how one group of users would handle this type of call.
- a different user group who has a preference for shortcut or "hot" keys rather than using the mouse would have a different user model.
- their user model would have an "SR Keys" action box. The significance of this difference is that the time duration for an "SR Keys" action is shorter than the time duration for an "SR Mouses" action.
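The performance consequence of an "SR Keys" versus an "SR Mouses" action can be sketched with a small Monte Carlo comparison of the two user-group models. All TRIA parameters below are invented placeholders, not figures from the specification.

```python
import random

def tria(lo, mode, hi):
    """TRIA(min, mode, max); note random.triangular's argument order."""
    return random.triangular(lo, hi, mode)

def simulate_call(use_hotkeys):
    """Total handling time (seconds) for the start of a disconnect call."""
    t = 0.0
    t += tria(1.0, 2.0, 4.0)       # "SR Talks": greeting
    t += tria(2.0, 4.0, 8.0)       # "Cust. Talks": disconnect request
    t += tria(0.3, 0.6, 1.2)       # "SR Decides": choose the function
    t += tria(1.0, 2.0, 4.0)       # "SR Talks": ask for name and number
    if use_hotkeys:
        t += tria(0.2, 0.4, 0.8)   # "SR Keys": shortcut-key group
    else:
        t += tria(0.8, 1.5, 3.0)   # "SR Mouses": mouse-using group
    return t

random.seed(7)
n = 5000
mouse_avg = sum(simulate_call(False) for _ in range(n)) / n
keys_avg = sum(simulate_call(True) for _ in range(n)) / n
print(round(mouse_avg - keys_avg, 1))  # average seconds saved per call
```

Running both models over many simulated calls quantifies the time difference between the two user groups for this call type.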
- the user models that will be initially constructed will be verified and validated to a certain extent.
- the models can be improved dramatically by using a refining stage after the initial models are constructed.
- the initial models will appear to be valid, possibly to a degree that may be acceptable.
- it is important to note that the model's validity, for almost all models, will be improved by using the procedure discussed later. It is extremely useful to encourage the user model designers to enhance the validity for a potentially small price (in terms of time and effort).
- One purpose of constructing a user model is to have the ability to predict user behaviors in the future.
- the second purpose focuses on operational performance. In most cases, the impact of a new system design on overall business process cannot be predicted. Of course, operations management is very concerned with the impact of a new, or re-designed, system on overall performance. In fact, most system development projects are justified on improving overall performance. It is important that the design of these new systems support the improvement of performance metrics.
- One example of a primary simulation outcome is shown in FIG. 21.
- the data in FIG. 21 is for illustration purposes only and does not represent actual results.
- One use of the information in FIG. 21 is an illustration of where time is consumed by call category. For example, for a Disconnect call type, 83% of the time is consumed with talking (both customer and service representative). Ten percent of the time is consumed with system response, 4% of the time is consumed with keying activity, and 3% of the time is consumed with reading information from the screen. For this call type, the benefits of spending resources towards reducing the keying time would probably be limited because keying is only 4% of the total time. However, investigating the sub-activities within the talking function could have dramatic impact on this call type.
- a 25% improvement in the keying function would represent a 1% (25% of 4%) overall improvement in time duration, while a 25% improvement in the talking function would represent a 21% (25% of 83%) overall improvement in time duration.
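This resource-allocation arithmetic (overall gain equals the local improvement times that activity's share of total call time) can be checked directly. The shares are the illustrative Disconnect-call percentages given above.

```python
# Shares of total time for the Disconnect call type (illustration figures).
shares = {"talking": 0.83, "system": 0.10, "keying": 0.04, "reading": 0.03}

def overall_gain(activity, local_improvement):
    """Overall time reduction from improving one activity by a fraction."""
    return local_improvement * shares[activity]

print(round(overall_gain("keying", 0.25) * 100))   # 1  (percent overall)
print(round(overall_gain("talking", 0.25) * 100))  # 21 (percent overall)
```

The comparison makes plain why design resources are better spent on the dominant activity for a given call type.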
- the usefulness of this type of information (contained in FIG. 21) is increased when the team is aware of effective resource allocation. These types of simulation outcomes can be an excellent source of guidance for the design team.
- A table showing exemplary steps to construct the user model with a CPM-GOMS flow diagram, using the service representative embodiment, is shown in FIG. 22. After the model is developed, it should be refined. An exemplary procedure to refine the model, using the service representative embodiment, is shown in FIG. 23.
- the present invention has been illustrated using an embodiment of a service representative using a computer interface, but is not limited by this embodiment.
- This invention can be applied to modeling any type of user for any type of user interface, such as airplane cockpits, machine control panels, motor vehicle dashboards, and boat control panels, as well as computer graphical user interfaces.
- This invention is not limited by these, but is meant to cover these and other applications or embodiments that are within the spirit and scope of the invention.
Abstract
A method of designing a user interface based on a list of user qualities and interactions of users. The method includes categorizing the users into groups based on at least one of user characteristics, performance characteristics, behavioral characteristics, and cognitive workload. The method further includes designing the interface based upon the categorized groups and on goals for the user interface.
Description
- This application is a continuation of U.S. patent application Ser. No. 10/134,430, filed on Apr. 30, 2002, which is a continuation of U.S. patent application Ser. No. 09/089,403, filed on Jun. 3, 1998, the contents of which are incorporated herein by reference in their entireties.
- 1. Field of the Invention
- This invention relates to modeling system users, and more specifically to modeling system users to aid in the design of user interfaces.
- 2. Description of the Related Art
- A user model is a representation of the set of behaviors that a user actually exhibits while performing a set of tasks. The purpose of user modeling is to build a model of the behaviors used when a user interacts with a system. For example, if the system the user interacts with is a computer system, then the interaction occurs primarily with the computer interface (i.e. keyboard, monitor, mouse, and sound).
- An interface design team may be assembled to gather information on users in order to design a user interface. If the interface design team is emphasizing performance, the behaviors and characteristics that emerge are those related to the expert user. The expert users usually can articulate their suggestions effectively and are normally interested in achieving performance. Therefore, interviewers from the interface design team pay close attention to the comments and suggestions of these expert users. Another reason for giving credence to the expert user is that experts are usually the people who get promoted and are likely to be chosen as members of the design team. The problem is, of course, that other types of users do not have the same behaviors and capabilities as these experts and, thus, their needs are not represented in the requirements gathering phase of the interface design. Expert users are typically a small percentage of the user population. If the interface is designed for the expert user, this leaves a high percentage of users for whom the interface is unsuitable or less than optimal.
- In some design projects, ease of learning, training, or novice aspects are emphasized to a great extent. This is particularly true when a trainer is in a lead position on the design team or when management places a high priority on reducing the costs of training. However, having the novices' needs be dominant in the interface design phase is no better than permitting the experts' needs to be dominant. One group is still being used for the design to the exclusion of the other group's needs. Novices generally also comprise a very small percentage of the user population. Therefore, designing an interface just for novice users may improve their performance but may jeopardize the overall performance of other users.
- If behaviors of users were condensed into a single set of behaviors, the set definition would be so wide and variable that it would be of limited use to the interface designers. That is, the characterization of the users would be so broad that the designers could not determine which interface options would make a difference in the users' performance.
- If there is no overwhelming performance issue or training issue that directs the team, then anecdotal behavioral information is obtained for a variety of users. User requirements information is usually gathered by more than one person from the design team. Thus, a great deal of discussion ensues after the user information is gathered, because each gatherer may have interviewed a different user with different capabilities, a different view of the system, and a different set of needs. Therefore, the resulting set of user requirements is a composite or average view of the user needs. In this situation, many of the needs of users do indeed surface, but they are not organized in a manner that is intuitively obvious. Also, interface designs to meet these needs are not necessarily optimally beneficial to any one group of users. This method of designing an interface for the composite or average user thus presents a substantial risk that very few users will be fully accommodated by the interface.
- Another current practice is that if users are categorized at all, it is done informally, based primarily on the opinion and judgment of the local operating management. Even though these individual users may be identified, their needs are mixed in with the needs of other users without regard to the group they represent. Also, with current practice, the descriptions of user behavior are produced anecdotally, not statistically. Quantitative performance results are not incorporated into the behavioral descriptions. User models are generally not constructed, primarily because there is only one user representation and all of the design team members think they know the needs of the single user.
- The goal of user modeling should thus be to characterize the users in such a way that the designers can incorporate the users' behaviors into the interface design so that performance is maximized (while acknowledging and compensating for the human element). The expectation is that the user models will also allow for the prediction of performance after the newly designed interface is operational. The style and type of user interface can significantly impact the resulting performance.
- Therefore, a method is needed to model system users that produces information that can be used in the design of an interface that maximizes the performance of the users, and also allows for the prediction of performance after the newly designed interface is operational.
- Accordingly, the present invention is directed to a method for categorizing, describing, and modeling system users that substantially obviates one or more of the problems arising from the limitations and disadvantages of the related art.
- It is an object of the present invention to provide a method that accurately categorizes, describes, and models a user's behavior while interacting with a system.
- It is a further object of the present invention to provide a method for modeling system users that provides qualitative and quantitative models.
- It is also an object of the present invention to provide a method for modeling types of system users that allows for the prediction of performance after the new user interface is operational.
- Another object of the present invention is to provide a method for modeling system users that aids in designing an interface more familiar and comfortable to users because particular components of the interface will be better suited for their particular style.
- The foregoing objects are achieved by the present invention, which preferably comprises a method for modeling types of system users. Behaviors of a variety of types of users are categorized into two or more groups. Descriptions of the behaviors of each user group are created based on behaviors of selected users from each user group. Models are generated for the described behaviors of each user group. A user interface can then be designed using information from these models. The performance of the variety of types of users is improved when the interface is used by these users.
- The behaviors may include navigation behaviors, parallel processing behaviors, and customer sales behaviors. Categorizing may comprise charting the behaviors on a chart having two, three, four, or more dimensions. The dimensions may include performance measures, cognitive workload measures, behavioral measures, or user characteristic measures.
- The descriptions of the behaviors of each user group may be related to the similarities within each group or the differences between each group. The descriptions of the behaviors of each user group may comprise listing the tasks by frequency and importance and selecting from the most important tasks for detailed task analysis. The detailed task analysis may comprise capturing the perceptual, cognitive, and motor stages of human behavior, and quantifying each stage as to processing speed and cognitive load. The detailed task analysis may be accomplished by using a modified GOMS methodology.
- The models may include qualitative models which may include how the users within a specific group behave in certain situations, or how the users within a specific group perform certain functions. The models may include quantitative models which may incorporate the capability to make numerical performance predictions. The models of the behaviors may be constructed in an interactive process that results in the models representing the strategies and activities for each user group. The models of the behaviors may be validated, and the validating of the models may use actual data.
- The present invention may also preferably comprise a method for modeling behaviors of interface users where the models are used to provide data for designing a system user interface. A list of user behaviors is created. Important behaviors based on the desired goals for the system user interface are identified. Data related to the important behaviors are obtained from a plurality of users. The data is graphed where the axes of the graph may be related to two or more important behaviors of the plurality of users. Clusters in the graphed data are identified, where the clusters represent groups of users with similar important behaviors. At least one user is selected from each user group. Additional data from the selected users is obtained, the additional data related to the selected users' behaviors. The selected users' behaviors are described based on analyzing the additional data. Models of said selected users' behaviors are created based on the descriptions of the selected users' behaviors. A user interface may be created using information from the models. The plurality of users' performance may be improved when using the user interface.
- Additional features and advantages of the present invention will be set forth in the description to follow, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the methods particularly pointed out in the written description and claims hereof together with the appended drawings.
- It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to provide further explanation of the invention as claimed.
- The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrating one embodiment of the invention. The drawings, together with the description, serve to explain the principles of the invention.
- The present invention is illustrated by way of example, and not by way of limitation, by the figures of the accompanying drawings in which like reference numerals refer to similar elements, and in which:
-
FIG. 1 is a block diagram of the present invention; -
FIG. 2 is a flowchart of the categorization methods according to the present invention; -
FIG. 3 is an exemplary user survey form; -
FIG. 4 is a flowchart of the selection of cognitive workload techniques according to the present invention; -
FIG. 5 is a flowchart of the selection of types of subjective workload measures according to the present invention; -
FIG. 6 is a table of a NASA-TLX rating scale definition; -
FIG. 7 is an exemplary graph of performance and cognitive workload according to the present invention; -
FIG. 8 shows exemplary instructions and definitions for a cognitive workload modified NASA-TLX survey; -
FIG. 9 is an exemplary cognitive workload modified NASA-TLX survey; -
FIG. 10 shows an exemplary procedure for administering a NASA-TLX survey; -
FIG. 11 shows an exemplary procedure for combining and analyzing survey data; -
FIG. 12 is a flowchart of GOMS modeling techniques; -
FIG. 13 is a table of task types and design information used to decide on a GOMS technique; -
FIG. 14 is a table of exemplary steps for user observation task analysis; -
FIG. 15 is a table of exemplary steps for user video task analysis; -
FIG. 16 is a table of exemplary steps for user eye tracking task analysis; -
FIG. 17A shows an exemplary data file format for key press; -
FIG. 17B shows an exemplary data file format for eye movement; -
FIG. 17C shows an exemplary data file format for screen display; -
FIG. 17D shows an exemplary data file format for screen display objects; -
FIG. 18 is an exemplary CPM-GOMS task analysis PERT chart according to the present invention; -
FIG. 19 is an exemplary block diagram from a PERT chart according to the present invention; -
FIG. 20 is an exemplary user qualitative model according to the present invention; -
FIG. 21 shows an exemplary primary simulation outcome; -
FIG. 22 is a table of exemplary steps to construct a user model; and -
FIG. 23 shows an exemplary procedure to refine a user model. - The present invention integrates the activities of categorizing, describing, and modeling into a single, consistent approach.
FIG. 1 shows a block diagram of the present invention. The first activity performed is to create a tentative list of characteristics and behaviors of the users 2. This tentative list is created by identifying the goals desired for the user interface or the user models, and listing expected and desired behaviors that are relevant to these goals. The list is then revised to include only those characteristics and behaviors that are important based on the goals. Then, the activity of categorizing 4 begins. Information is obtained from users regarding their characteristics and behaviors. This information may be obtained from a survey completed by the users, or from some other means. Each user's characteristics and behavioral information is then converted to a score or value. The users are then mapped or charted based on which behaviors they exhibit. The mapping or charting is analyzed to identify clusters of users. These clusters define groups of users that have similar behaviors. The user population may be charted on a multidimensional chart and the groupings or clusters emerge from analysis of the chart data. The dimensions of the chart are the important behaviors and may include performance measures, cognitive workload measures, behavioral measures, or user characteristic measures. - The groups are then analyzed to produce descriptions of each group as shown in
activity block 6. This consists of selecting one or more users from each group and obtaining additional behavioral information. This additional behavioral information is analyzed to produce descriptions for each group. These descriptions are then used to formulate models of behaviors 8 for each group. Information from these models can be used to design and create a user interface 10. The methods and means to accomplish these activities will now be discussed in further detail. - The present invention may be applied to various types of users of a variety of system interfaces. One embodiment will be described in detail to illustrate the present invention. This embodiment uses the present invention to model system users such as service representatives who interface with customers and use a computer interface to help service their customers' needs. For example, this computer interface may be used by service representatives for the purpose of negotiating new services with customers. When a customer calls the service representative and requests new or additional services, the representative can accomplish the sales and setup of those requested services through the use of the computer interface.
- Categorizing User Behavior
- As shown in
FIG. 1 , after the activity 2 of listing the behaviors of all users, the next activity 4 is to categorize the behaviors into groups. The user population is categorized into several groups. Preferably, the number of groups may range from 3 to 5; however, more than this number would still be within the spirit and scope of the present invention. This categorization effort is accomplished based upon similar behavioral characteristics between users that are important to system interface design and use. Just as having a single representation is an oversimplification of the user population, representing each and every user individually is not practical. There are hundreds to thousands of users for some major systems. Therefore, it is a reasonable compromise to group users into 3 to 5 groups and represent the needs of those groups as the user interface needs. - Four methods used to categorize users, although others may be used, are shown in
FIG. 2 . These categorization methods are: user characteristics method 12, performance characteristics method 14, behavioral characteristics method 16, and cognitive workload method 18. An appropriate method is selected based on the types of users and the goals for the desired system interface. A combination of methods may also be used, and still be within the spirit and scope of the present invention, if this is desired based on the users and the goals of the user interface.
- Task/job dependent user characteristics may also facilitate the categorizing of users. Using the service representative embodiment, users who indicate that customers almost never give clues that they will purchase additional products or services may prefer an interface that prompts them to cross-sell additional products or services. However, users who indicate that customers almost always give clues that they will purchase additional products or services may prefer an interface that does not prompt them to cross-sell. These users probably already have strategies that enable them to cross-sell successfully.
- An exemplary survey that captures user characteristics that may facilitate the categorizing of users is shown in
FIG. 3 . - Performance is also a method to facilitate the categorizing of users. For the service representative embodiment, four months of performance measures are acquired for these users. These performance measures include: gross dollar sales per month, net dollar sales per month, retention of sales, "cross or up" sales per month, number of orders per month, dollar sales per order, and number of incoming calls per month.
- The number of orders per month may separate the order takers from the rest of the users. The order takers are users who, as quickly as possible, set up the package or service the customer has requested. They do not cross-sell other packages or services to the customer; they quickly take an order, hang up, and quickly take an order again. Order takers are expected to have a larger number of orders per month as compared with the other users. However, they may have a lower average of dollar sales per order.
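The performance-based categorization just described lends itself to simple cluster analysis. The sketch below is illustrative only: the minimal k-means routine and the monthly figures (orders per month, dollar sales per order) are hypothetical examples, not data from this application, and real data would typically be normalized per dimension before clustering:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal 2-D k-means; returns (centroids, cluster index per point)."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each user to the nearest centroid (squared Euclidean distance).
        assign = [min(range(k),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        # Move each centroid to the mean of its assigned users.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, assign

# Hypothetical data: (orders per month, dollar sales per order) for six
# service representatives.  The first three fit the "order taker" profile.
reps = [(420, 38), (455, 35), (390, 41),
        (120, 160), (140, 150), (110, 170)]
centroids, groups = kmeans(reps, k=2)
```

With two well-separated groups such as these, the clustering recovers the order-taker profile (many orders, fewer dollars per order) as one cluster and the higher-value sellers as the other.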
- Cognitive workload measurement comprises the demands affecting the human operator throughout the transfer and transformation of inputs into outputs. Similarly, workload has been defined as the proportion of information processing capacity, resources, or effort expended in meeting system demands. Three major concepts define the framework for workload assessment: system demands, processing resources and effort expenditure, and operator and system performance.
- System demands are defined as environmental, situational, and procedural. Environmental demands consist of temperature, humidity, noise, illumination, etc. Situational demands are the characteristics and arrangement of displays and controls, the dynamics of a vehicle, etc. Procedural demands are the duration of a task, standard system operating procedures, special instructions given to the operator, etc.
- Processing resources and effort expenditure are indicative of internal abilities of an operator. Processing resources refer to an operator's ability to receive and process the system demands. The multiple resources theory is used to determine how an operator processes information. According to this theory, rather than having a single resource, an operator's processing system consists of several separate capacities or resources that are not interchangeable. In addition, according to this theory, there are three stages of processing, namely an encoding stage, a central processing stage, and a responding stage. There are also two modalities of processing (visual and auditory), two codes of processing (verbal and spatial), and two types of responses (vocal and manual).
- Similar to the processing resources, effort expenditure refers to an operator's ability to manage the system demands. This ability may be continually changing (e.g., physiological readiness, experience and motivation) or may stay relatively constant (e.g., general background, attitude, personality, psychophysical factors).
- Training also affects processing resources and effort expenditure. Although there are no comprehensive theories on how training/practice affect workload, research has shown automatic behavior to decrease cognitive workload. By increasing the levels or amount of practice, workload decreases. The increased levels of practice can lead to automatic behavior. This type of behavior does not appear to require conscious use of processing resources or effort expenditure on the part of the operator.
- The effects of training on performance and workload typically result from changes in the manner the task is performed. Such changes may include a transformation to filter out unnecessary data, the application of increasingly effective coding techniques, and the evolution of internal models to allow perceptual anticipation and motor programming. Thus, operator strategy also affects processing resources and effort expenditure. Operator skill (the operator's ability to choose the appropriate strategy) also may affect processing resources and effort expenditure.
- This demonstrates that workload is a multidimensional construct. The multidimensional aspects are reflected in the multidimensional elements themselves and in the interaction of these elements to determine a load. The implication of this conceptual framework is that no single measure of workload may be adequate; rather, a plurality of measures may be required to assess workload. In addition, a variety of workload assessment techniques are required to assess each major factor or component of workload. Before a workload assessment technique is chosen, a number of properties for evaluating workload measurements will be discussed.
- The basic properties that any measurement should have are validity and reliability. However, because cognitive workload is multidimensional, many other properties are also helpful in determining which measurement to choose. These properties include sensitivity, diagnosticity, global sensitivity, intrusiveness, implementation requirements, operator acceptance, and transferability.
- Sensitivity is a primary property of cognitive workload. It refers to a measurement's ability to detect different degrees of workload imposed by the task or system. The degree of sensitivity required is directly associated with the question to be answered by the workload technique. Two basic questions asked with regard to workload are: (1) is an overload occurring that degrades operator performance, and (2) is there a potential for such an overload to exist?
- Diagnosticity refers to the ability to discriminate differences in specific resource expenditures, as related to the multiple resources model. For example, a secondary tracking task may demonstrate there is an overload for motor output when an operator is performing a typing task.
- Global sensitivity refers to a measurement's capability to detect changes in workload without clearly defining why the change is occurring in workload. A globally sensitive measure cannot discriminate differences in specific resource expenditures.
- Intrusiveness is a measurement's tendency to interfere with the primary task. An intrusive task may not affect performance (an operator who is not overloaded may compensate), but it may affect the workload measure (an operator may have felt more workload, heart rate may have increased, etc.). In an operational environment, this property is extremely important to control; otherwise, operator performance may decrease and operator workload may increase due to the chosen workload measurement technique and not to changes in the task.
- Implementation requirements include any equipment, instruments, and software that are necessary to present the task. It also includes data collection procedures and any operator training that is necessary for proper use of the measurement.
- Operator acceptance is important to ensure that a measurement will reflect accurate data. If an operator does not accept the measurement, the measurement could be ignored (e.g., the operator ignores the secondary task or randomly rates the task with a subjective measurement), the operator could perform at a substandard level, or operator workload could increase due to the measurement not due to the task.
- Transferability is the ability of a measurement to be utilized in a variety of applications. Transferability is based on the specific measurement and task or system to be measured. For example, a tracking task (i.e. one where a specific user action is monitored) may be transferable, but only to a system that will be measured with a secondary task, which focuses on visual, spatial, and manual skills.
- For the service representative embodiment of the present invention, the cognitive workload (mental workload)
method 18 is chosen because of its ability to obtain a variety of information. However, any categorization method used would still be within the spirit and scope of the present invention. Cognitive workload has been described in several publications, for example O'Donnell, R. D., and Eggemeier, F. T. (1986), Workload assessment methodology, in K. R. Boff, L. Kaufman, and J. Thomas (Eds.), Handbook of perception and human performance: Volume II. Cognitive processes and performance (pp. 42/1-42/49), New York: Wiley. - Different types of cognitive workload techniques are shown in
FIG. 4 . They are subjective measures techniques 20, performance-based measures techniques 22, and physiological measures techniques 24. One or more of these methods is selected, again based on the types of users and the goals for the desired system interface. For the service representative embodiment, both subjective measures and performance-based measures are used. However, any cognitive workload technique, or combination of techniques, can be used, and such use would still be within the spirit and scope of the present invention. - For subjective measures, users are required to judge and report the level of workload experienced during the performance of a specific task or system. These measures are usually based on rating scales. Theoretically, the operator can accurately report an increase in effort or capacity expenditure associated with subjective feelings. Some of the more researched subjective assessment techniques include the Modified Cooper-Harper, Subjective Workload Assessment Technique (SWAT), NASA-Task Load Index (NASA-TLX), and Subjective Workload Dominance (SWORD).
- A reason for using subjective measures is that they typically are highly sensitive to detecting overloads. They tend to be globally sensitive and are not intrusive since they are performed after the task is completed. In addition, the implementation requirements are low (e.g., a pencil and paper, possibly some training on the measurement) and operator acceptance is usually high. However, subjective measures are not always diagnostic, especially in facilitating the redesign of a task or system. The few subjective techniques that have some diagnostic abilities are very generalized. Some subjective techniques also have problems with operator acceptance.
- Physiological measures examine the physiological response to the task requirements. Typically, users who experience cognitive workload display changes in a variety of physiological functions. Some of the physiological measurements of workload include heart rate, heart rate variability, sinus arrhythmia, EEGs, ERPs, and eye blink.
- Physiological measures tend to be extremely sensitive, some are highly diagnostic, while others are globally sensitive. However, physiological measures are intrusive, have a high degree of implementation requirements (e.g., for an EEG, an EEG machine, an oscilloscope, and electrodes are needed), and are expected to have low operator acceptance in operational environments.
- Performance-based measures are broken down into primary task measures and secondary task measures. Primary task measures evaluate aspects of the operator's ability to perform the intended task. Typically, all measures of cognitive workload should include the primary task performed by the operator. Primary tasks are only sensitive to overloads in workload, they are not typically sensitive to the potential for an overload to exist. Some primary tasks are globally sensitive. Since they are the primary task, they are not intrusive and have high operator acceptance. However, primary tasks are not diagnostic, and are generally not transferable. Their implementation requirements vary.
- Secondary task measures are categorized as either a subsidiary task paradigm or a loading task paradigm. In the subsidiary task paradigm, secondary task measurements evaluate how much of one or more resources are being consumed by the primary task. Users put emphasis on primary task performance. Secondary tasks are added to the primary task to impose an additional load on the operator. Analyzing performance decrements on secondary tasks determines how much resources are consumed. Properly choosing secondary tasks determines which resources are consumed.
- In the loading task paradigm, secondary task measurements determine when and how much the primary task deteriorates. Users put emphasis on secondary task performance while the degree of difficulty of the primary task is manipulated. Two or more primary tasks may also be compared for task deterioration with this paradigm.
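As a minimal sketch of how the subsidiary task paradigm quantifies resource consumption, the relative drop in secondary-task performance when the primary task is added estimates how much of the shared resource the primary task consumes. The scores below are hypothetical:

```python
def decrement(baseline: float, dual_task: float) -> float:
    """Percent drop in secondary-task performance when performed
    concurrently with the primary task, relative to its single-task baseline."""
    return 100.0 * (baseline - dual_task) / baseline

# Hypothetical tracking-task scores (higher is better): performed alone,
# then while the operator also performs the primary data-entry task.
loss = decrement(baseline=95.0, dual_task=60.0)
```

Whether the decrement reflects a particular resource depends on choosing a secondary task that loads the same processing modality, code, and response type as the primary task.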
- Secondary tasks are extremely diagnostic and may be sensitive to potential overloads. However, if a secondary task is chosen that does not employ the same resources as the primary task, it will not be sensitive to changes in workload and may not reveal expected overloads. It is usually recommended that a battery of secondary tasks be used, and this can be time consuming. Secondary tasks are, by nature, intrusive, tend to have high implementation requirements (e.g., for a tracking task, a joystick, screen, and computer are needed), and are expected to have low operator acceptance in operational environments.
- For the service representative embodiment, a subjective measure is chosen. As shown in
FIG. 5 , a decision must be made to select among various subjective assessment techniques such as the Modified Cooper-Harper 26, Subjective Workload Assessment Technique (SWAT) 28, NASA-Task Load Index (NASA-TLX) 30, or Subjective Workload Dominance (SWORD) 32. For the service representative embodiment, a NASA-Task Load Index (NASA-TLX) technique is chosen. However, any subjective workload measure chosen would still be within the spirit and scope of the present invention. - The NASA-TLX evolved from the NASA Bipolar scale. Similar to SWAT, the Bipolar scale was developed with the consideration that workload is multidimensional; thus, a measurement of workload should also be multidimensional. Developed after SWAT, the Bipolar was designed with nine scales because the Bipolar authors did not believe the scales in SWAT were sufficient. The Bipolar also recognizes that, from task to task, the scales may vary in importance, and allows users to acknowledge these differences. In addition, this technique was developed to contain diagnostic scales, which could be rated based on subjective importance.
- The NASA-TLX inherited properties from the Bipolar scale with the exception that the NASA-TLX has six scales to allow for an easier implementation. The scales represent task characteristics (mental demand, physical demand, and temporal demand), behavioral characteristics (performance and effort), and operator's individual characteristics (frustration). These scales and their corresponding definitions are shown in
FIG. 6 - TLX also added the ability to consider individual differences through the weighting of the workload scales. TLX involves a two-part procedure consisting of both ratings and weightings. After the operator completes the task, numerical ratings are obtained for each of the six scales. The operator is given both the rating scale definition sheet and a rating sheet. On the rating sheet, there are twenty intervals with endpoint anchors for each of the six scales. Users mark the desired location for each scale. A score from 5 to 100 is obtained on each scale by multiplying the rated value by five. Depending on the situation, rating sheets, verbal responses, or a computerized version are considered practical.
- In the second part of TLX, users weight the six scales. Paired comparison procedures are implemented for 15 comparisons, accounting for comparisons between all of the scales. Users choose the scale which most significantly created the level of workload experienced in performing a specific task. For each task and operator, each scale is tallied for the number of times it was chosen in the paired comparisons. Scales can have a tallied value between zero and five. Each new task requires users to rate and weight the scales upon its completion.
- The ratings of each scale are arranged in a raw ratings column. Adjusted ratings are calculated by multiplying the raw ratings by the corresponding tallied scale scores. The adjusted ratings for all six different scales are then summed. The total sum is divided by 15 (for the number of paired comparisons) to obtain the weighted workload score (ranging from 0 to 100) for the operator in that task condition. Analysis of the data can then be performed.
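The arithmetic of this two-part procedure can be sketched as follows. This is a minimal illustration, not part of the patent; the function name and the example ratings and tallies are invented, but the computation (adjusted rating = raw rating times tallied weight, summed and divided by 15) follows the description above.

```python
# Sketch of the NASA-TLX weighted-workload arithmetic described above.
# Function name and the example ratings/weights are illustrative, not
# taken from the patent.

def tlx_weighted_workload(raw_ratings, tally_weights):
    """Return the weighted workload score (0-100) for one operator/task.

    raw_ratings   -- scale name -> raw rating (interval mark x 5, so 5-100)
    tally_weights -- scale name -> times the scale was chosen in the 15
                     paired comparisons (0-5; all six tallies sum to 15)
    """
    # Adjusted rating = raw rating x tallied weight; sum over the scales.
    adjusted_sum = sum(raw_ratings[s] * tally_weights[s] for s in raw_ratings)
    # Dividing by the 15 paired comparisons yields a 0-100 score.
    return adjusted_sum / 15.0

ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 0, "temporal": 4,
           "performance": 2, "effort": 3, "frustration": 1}

score = tlx_weighted_workload(ratings, weights)
```

Because the six tallies always sum to 15, the weighted score is simply a weighted average of the raw ratings and therefore stays on the same 0 to 100 range.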
- Due to the multidimensional properties of workload, some level of diagnosticity may be distinguished by using TLX. Generalized conclusions may be made based on operator strategies and on weightings and judgments of the six dimensions of mental demand, physical demand, temporal demand, performance, effort, and frustration.
- TLX is not considered intrusive because it is performed after the task is completed. Implementation requirements are typically low; the definition sheet, rating sheet, and paired comparisons are needed for every operator and task. Some time may be required for users to practice with and familiarize themselves with the scales. Operator acceptance is typically high and TLX is usually transferable.
- In addition, TLX was robust against slight (e.g., 15-minute) delays in operator ratings and against non-controlled order effects. TLX is also considered potentially more sensitive at low levels of workload compared to SWAT, and TLX's paired comparison procedure may be omitted without compromising the measure.
- For the service representative embodiment, a goal is to determine how a new system interface should be designed to increase the performance of service representatives. Since a current system interface exists, the question must then be asked, why use a cognitive workload assessment technique to determine how the system interface should be re-designed? Before this question can be answered, the current system interface and how it is used by the service representatives to sell products, services, and packages must be examined.
- In a typical existing system interface, service representatives need to examine a variety of screens depending on the products, services, or packages to be sold. It is assumed that due to the number of screens a service representative must examine, performance is not at an optimum level. It is also assumed that by redesigning the system, performance should improve. Therefore, first the performance of the service representatives on the current system needs to be determined. It is expected that the performance data will display a range of values and that groups of these values will represent selling strategies. For example, service representatives who have high levels of performance (high sales revenue per month) are expected to use different selling strategies from service representatives who have low levels of performance. Thus, one of the goals of the system interface design team is to determine the strategies used by service representatives and their corresponding performances. Once this information is known, it will be more feasible to know how the system needs to be redesigned. This information should provide direction to interface designers on how to redesign the system to improve performance. It may also direct what types of strategies should be taught to the lower performing group(s).
- Based on performance data alone, groups of service representatives are not easily distinguishable. Different strategies result in different performances. In addition, it is expected that each of the strategies may result in a range of performances. Thus, performances from the different strategies are expected to overlap, making it unclear which service representatives use which strategies. Since one of the goals of the system interface design team is to determine the strategies used by service representatives, it is important to know which service representatives use the same or different strategies.
- For the service representative embodiment, there are three reasons for using a cognitive workload assessment technique to determine if and how a new system should be designed. First, a cognitive workload assessment technique will result in one of three possible outcomes: all service representatives are overloaded, none of the service representatives are overloaded, or some of the service representatives are overloaded. If the results are that some or all of the service representatives are overloaded, the system should typically be redesigned to lower the load. If none of the service representatives are overloaded, the system may not necessarily have to be redesigned. It may be cost justified to train different strategies to the service representatives who are performing at a lower level. Thus, a cognitive workload assessment technique will help to determine if the system should be redesigned.
- Second, since it is expected that degrees of cognitive workload correlate to types of strategies, measuring cognitive workload is an indirect way to measure types of strategies. Furthermore, measuring cognitive workload is quick, easy to perform, and inexpensive. Conversely, determining each service representative's strategy for each task would take a considerable amount of time, would require a lot of effort, and would be very expensive. By obtaining cognitive workload measurements, groups of service representatives may be parsed out and a small number of service representatives in each group may be examined for their strategies. A graph of cognitive metric and performance metric is expected to help parse out the groups, given the assumption that the degree of cognitive workload and user-characteristics are highly correlated to strategy. An example of this graph is shown in
FIG. 7 . - In this figure, the Blue Group has higher performance and lower cognitive workload, the Green Group has medium performance and higher cognitive workload, and the Yellow Group has lower performance and lower cognitive workload. From this data, individuals in each group could be examined for strategies used during their tasks. It is expected that strategies within a group would be similar, but between groups would be different.
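Assuming workload and performance metrics placed on comparable scales, the parsing of users into groups such as those in the FIG. 7 graph could be sketched as a nearest-centroid assignment. The data points, centroid positions, and group names below are invented for illustration; the patent does not prescribe a particular grouping algorithm.

```python
# Illustrative sketch (not from the patent): assigning users to groups
# by their (cognitive workload, performance) coordinates, as in the
# FIG. 7 graph. Both axes are assumed to be on 0-100 scales.

def assign_groups(users, centroids):
    """Assign each (workload, performance) point to the nearest centroid."""
    groups = {name: [] for name in centroids}
    for user, point in users.items():
        nearest = min(centroids,
                      key=lambda name: (centroids[name][0] - point[0]) ** 2
                                     + (centroids[name][1] - point[1]) ** 2)
        groups[nearest].append(user)
    return groups

# Hypothetical (workload score, performance score) pairs per representative.
users = {"rep_a": (25, 90), "rep_b": (30, 85),
         "rep_c": (70, 50), "rep_d": (28, 20)}
# Hand-picked seed centroids mirroring the Blue/Green/Yellow groups.
centroids = {"blue": (25, 90), "green": (70, 50), "yellow": (30, 20)}

groups = assign_groups(users, centroids)
```

From such a grouping, a small number of representatives in each group could then be selected for the more detailed strategy examination described above.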
- Through understanding the strategies, a new system may be designed more appropriately. Also, knowledge of the strategies used by the service representatives facilitates the system redesign such that better strategies are easier to use and understand, and other strategies are not hindered in the system redesign. A goal in redesigning the system is to improve performance in as many of the groups as possible. However, the redesign should not trade off one group's improved performance for another group's impaired performance. The redesign should also result in an overall improvement in performance compared to the old level of performance. A lack of overall improvement would suggest that both improvements and detriments had been redesigned into the groups' tasks, canceling each other out.
- Third, a cognitive workload assessment technique may provide clues to improve the system. The diagnosticity of the technique may give some insight as to where the problems which result in lower levels of performance occur and how to design the system to eliminate such problems. For example, using a multidimensional subjective technique, it may be found that one group feels overloaded on the dimension of mental effort. It may be they feel there is too much to remember, and if the system was redesigned as menu-based to lower the use of memory, the mental effort load would be decreased to a more satisfactory level.
- In addition to these three reasons for using a cognitive workload assessment technique to determine how a new system should be designed, some of these techniques may also be used to evaluate the newly designed system. Some of the cognitive workload assessment techniques may be used to determine if the new system decreases the service representatives' loads. Several cognitive workload assessment techniques can be used at any stage of development. A properly chosen technique can signal design problems early in the development of a new system.
- Any system interface design effort would benefit from measuring the cognitive workload of the current tasks. As mentioned previously, this information may help determine if the system needs to be redesigned. In addition, the cognitive workload of a task should not be measured relative to another task; rather, it should be an absolute measurement for the system interface design team. This restricts SWORD from being recommended. The measurement should give diagnostic information so that if the system needs to be redesigned, information from the cognitive workload measure will help indicate what should be redesigned and how it should be redesigned. This restricts MCH from being recommended. It is unclear whether the measure needs to be sensitive to low levels of workload. Therefore, SWAT or NASA-TLX could be recommended as the subjective measure for the service representative embodiment. However, NASA-TLX was chosen over SWAT as the recommended technique because sensitivity to low levels of workload may be required. The recommendation is also based on some of NASA-TLX's properties. As compared to SWAT, NASA-TLX is fast and easy to perform. Service representatives will probably have a higher acceptance of it than of SWAT.
- Although the NASA-TLX is thus seen to be best suited for the service representative embodiment, it was modified. The NASA-TLX currently contains six scales, namely, mental demand, physical demand, temporal demand, performance, effort, and frustration. Since service representatives are not affected by physical demands, this scale was removed from the TLX. In addition, effort is a difficult scale to define. In pre-study testing, effort was confused with mental demand. Therefore, effort was also removed from the TLX. Furthermore, the performance scale was removed from the TLX since users may view it as a scale related to their performance reviews.
- The modified TLX for use in the service representative embodiment thus contains the three scales of mental demand, temporal demand, and frustration. Similar to the original NASA-TLX technique, these three scales will be rated and compared. However, service representatives will only perform one rating based on their tasks that day; each task will not be individually rated. An exemplary cognitive workload TLX survey is shown in
FIGS. 8 and 9. Instructions and definitions for the survey are shown in FIG. 8, while the survey itself is shown in FIG. 9. Exemplary steps outlining procedures for administering the survey and the modified NASA-TLX instrument, using the service representative embodiment, are shown in FIG. 10. - After the survey data is received from the users, it is combined and analyzed. This includes graphing or charting the data and identifying groupings or clusters on the graph or chart. These groups suggest users with similar behaviors. One or more users from each group is selected. These selected users will undergo a more detailed analysis. An exemplary outline of steps and procedures for combining and analyzing the survey data is shown in
FIG. 11 . - Describing User Behavior
- A task analysis is a method for understanding how a user performs a specific task and for describing this behavior. Task analyses give interface designers an understanding of what must be done to accomplish a task. Designers may also obtain insight into how a task can be better accomplished and what is needed to better accomplish it. All of this information facilitates the development of a new system interface.
- In addition, task analyses may help system interface requirements development by determining what functionality is necessary or desired in a system interface. Functionality refers to those functions in a system that users find useful in accomplishing their tasks. Furthermore, functionality together with a well-designed interface should result in a system that is easy to learn and use.
- Behavioral characteristics are user characteristics that are not self-rated and are usually determined through a task analysis. Previously, it was noted that the user characteristics are measured through a survey which is self-rated by the user. Behavioral characteristics are not necessarily known to the user or may not be well communicated. Examples of behavioral characteristics are the user's actual method of navigation and use of serial processing or parallel processing.
- Users may not notice when they use menus, compared to when they use shortcut keys. A task analysis, where the user is being monitored, may provide more insight into behavioral characteristics since the user is actually performing the task. Each action the user performs to accomplish the task is recorded in a task analysis. The record can show when menus are used versus shortcut keys versus other navigational procedures for a user. Different groups of users are expected to use different navigational techniques.
- Users may also not be familiar with their processing methods. Serial processing is the ability to perform one action (mental or motor) at a time, while parallel processing is the ability to perform more than one action. Whether a user tends towards serial processing or parallel processing may be best determined in a task analysis. As previously noted, each action that the user performs to accomplish the task is recorded in a task analysis. The analysis record can show when the user is performing one action at a time versus performing a variety of actions at the same time. Different groups of users are expected to use different processing techniques.
- Since behavioral characteristics are not observed until the task analysis has been performed, these characteristics are used to validate the categorization of the groups of users. If the previously categorized groups of users are found to have different behavioral characteristics, then the groups will need to be re-categorized. It is important that the behavioral characteristics within the groups be similar so that the models are accurate representations of the groups.
- For this embodiment of the present invention, a subset of individuals in each categorized group is observed for the behaviors used to perform their tasks. Two behaviors employed by the service representatives that had a predominant impact on their job performance were the number of cross-selling attempts made to the customer and the length of the call. For example, representatives who made no cross-selling attempts and quickly performed the service requested by the customer completed a large number of customer calls; this behavior typically resulted in a large number of low revenue calls. Other service representatives, by contrast, talked longer with the customer to determine the types of products or services they could most likely cross-sell; this behavior typically resulted in a smaller number of higher revenue calls. Based on the normalized number of cross-selling attempts per call and the normalized average length of call, the representatives were grouped by similar behaviors.
- Each observed service representative was charted in a graph where the normalized average number of cross-selling attempts was plotted on the x-axis and the normalized average length of call on the y-axis. This was done for each observed service representative for a variety of call types (task types). Representatives observed to have similar behaviors over the different call types were grouped together. These behavior-based groups were then used to validate the cognitive and performance categorized groups.
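The normalization step that precedes this charting can be sketched as follows. The patent does not specify a normalization method, so a simple min-max rescaling to the 0-1 range is assumed here; the measurement values are invented.

```python
# Hypothetical sketch of the normalization step: each behavior measure
# is rescaled to 0-1 (min-max) before representatives are charted on
# the cross-sell-attempts vs. call-length graph. Values are invented.

def min_max_normalize(values):
    """Rescale a list of measurements linearly onto the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

attempts_per_call = [0.0, 0.5, 1.2, 2.0]   # avg cross-sell attempts/call
call_length_secs  = [120, 150, 240, 300]   # avg length of call (seconds)

x = min_max_normalize(attempts_per_call)   # x-axis in the behavior graph
y = min_max_normalize(call_length_secs)    # y-axis in the behavior graph
```

Putting both measures on a common scale keeps one measure (e.g., call length in seconds) from dominating the distance comparisons used when grouping representatives.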
- The behavior-based groups were also used in the GOMS analyses. A GOMS analysis is another type of task analysis that was used to facilitate the description of the different groups of users.
- A GOMS model is a task analysis method that indicates, in the form of a model, the steps a user must accomplish to complete a task. The model can be used to help choose the appropriate functionality for a system. The model can also be used to estimate whether the task is easy to learn and use.
- GOMS rests on an empirically validated relationship between human cognition and performance. GOMS is based on the Model Human Processor, a model of a user interacting with a computer that can be described by a set of memories and processors together with a set of principles.
- First, sensory information is acquired, recognized, and placed into working memory by perceptual processors. A cognitive processor then handles the information and commands the motor processor to perform physical actions. The principles guide how the processors function. This is a simplified model of a user interacting with a computer, but it does facilitate the understanding, predicting, and calculating of a user's performance relevant to human-computer interaction.
- GOMS is an acronym that stands for Goals, Operators, Methods, and Selection rules. GOMS uses these components to model a human's interactions with a computer. Goals refer to the user's goals. What does the user want to accomplish? Goals are typically broken down into subgoals. Operators are the actions the user performs with the computer interface to accomplish the Goals. Examples of Operators are keystrokes, mouse movements, menu selections, etc. Methods are the sequences of subgoals and Operators that accomplish a Goal. Since GOMS models are not based on novice performance, the Methods are routine. Selection rules are personal rules users follow to determine which Method to use if more than one Method can accomplish the same Goal. The Goals, Operators, Methods, and Selection rules combine to model how a user performs a task.
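The four GOMS components can be represented as data. The sketch below is illustrative only: the goal name, Method names, Operators, and the particular Selection rule are invented, and a real GOMS model would encode many such Goals and user-specific rules.

```python
# Minimal sketch (not from the patent) of GOMS components as data:
# a Goal maps to candidate Methods (sequences of Operators), and a
# Selection rule picks one Method when several accomplish the Goal.

GOALS = {
    "delete-word": [
        {"name": "menu-method",
         "operators": ["select-word", "open-edit-menu", "click-delete"]},
        {"name": "shortcut-method",
         "operators": ["select-word", "press-ctrl-x"]},
    ],
}

def select_method(goal, prefers_shortcuts):
    """Selection rule: a user who prefers shortcuts picks a shortcut
    Method when one exists; otherwise fall back to the first Method."""
    methods = GOALS[goal]
    if prefers_shortcuts:
        for m in methods:
            if "shortcut" in m["name"]:
                return m
    return methods[0]

chosen = select_method("delete-word", prefers_shortcuts=True)
```

Different groups of users would be modeled with different Selection rules (here, the `prefers_shortcuts` flag), which is one way the behavioral differences between groups discussed above could surface in the models.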
- GOMS models cover three general issues. First, they cover lower-level perceptual-motor issues. For example, GOMS discerns the effects of interface arrangement on keystroking or mouse pointing. Second, GOMS models display the complexity and efficiency of the interface procedures. Eventually the user must determine and execute a procedure to perform useful work with the computer system. Third, GOMS models examine these components and how they interrelate in the design of the system.
- GOMS models are approximate and include only the level of detail necessary to analyze the design problem. It is not necessary to have all parts of an analysis examined at the same level. Some design situations may require some areas of analyses to be examined to the level of primitive Operators (Operators at the lowest level of analysis), while other areas may be analyzed with higher-level Operators. GOMS models allow selective analyses.
- Ideally, a GOMS model will produce quantitative predictions of performance earlier in the development cycle than prototyping and user testing. In addition, this model will also predict execution time, learning time, errors, and will identify interface components that lead to these predictions. Changes to these interface components will produce quantitative changes to the predictions of performance.
- One of the major assumptions that is universally established for the GOMS models is that of the experienced user. That is, that the behavior that is being described with GOMS task analysis is that of an experienced user. This experienced user assumption and its significance is now discussed.
- The term experienced user is meant to identify users whose performance and behavior have stabilized. In particular, the behavior and performance of an experienced user are considered to be stabilized to the point that the particular user accomplishes tasks in the same manner and style for each task execution. This means that their task time would be fairly consistent. This also means that their error rate would be minimal. Stabilized performance also means that the user is no longer learning the system, but has established (and repeats) their interaction, behavior, and style. If the user were still learning the system, it would be expected that their task time would improve as they gain additional experience.
- Even more significant than task time and error rate, the experienced user has established a defined strategic approach to task completion. That is to say, that the user has worked with the system long enough to adapt where it is appropriate, and selected an approach to task completion that best matches the user's capabilities and style. Thus in defining an experienced user, one is really looking for the following characteristics: stable task time, minimal errors, and established task strategy.
- The expert user and the experienced (lower performance) user may differ in their established task strategies, resulting in different types of user models for the two types of users. These models could then be used to facilitate the design of a system for the different groups.
- Within a grouping of users, the behaviors exhibited while interacting with the system will be generally similar. Users from each group would be selected and their behavior observed and documented. A CPM-GOMS approach has been modified to describe and document these user behaviors. Using the method according to the present invention, behaviors are examined for similarities within a group and differences between groups of users. For example, a group of expert users may, to different degrees, use parallel processing for their cognitive activities while novices may use serial processing. These descriptions focus on behaviors that affect performance, which are then incorporated into the user models. There are limits to the descriptions, both in time consumption and user knowledge. Only the functions that are frequent and important are formally described.
- GOMS has developed into a family of cognitive modeling techniques. The GOMS family contains four techniques, all based on the GOMS concept of Goals, Operators, Methods, and Selection rules. All of the techniques produce quantitative and qualitative predictions of user performance on a proposed system. A decision must be made between the GOMS modeling techniques. The techniques are shown in
FIG. 12 and include CMN-GOMS 34, KLM 36, NGOMSL 38, and CPM-GOMS 40. Any of the above, or other, GOMS modeling or task analysis techniques used would still be within the spirit and scope of the present invention. - For a GOMS technique to be used in the design process, the user's task must be goal-oriented, routinized skill must be involved, and the user must control the majority of task progression versus the computer system or other agents controlling the majority of task progression. Given that restriction, choosing which GOMS technique should be used in the design process is typically based on two factors that relate to the design situation. These two factors are the type of tasks the users perform and the types of design information the GOMS technique obtains.
- The types of tasks the users perform are divided into serial-based or parallel-based tasks. Serial Operators can approximate many tasks, such as text editing. If a task can be appropriately represented by serial Operators, a serial processing GOMS technique should be used (CMN-GOMS, KLM, or NGOMSL). However, not all tasks can be approximated by serial Operators. For example, a task in which a service representative is concurrently talking to the customer and typing information into the system is more appropriately represented as tasks occurring in parallel. For these tasks, CPM-GOMS, the parallel processing GOMS technique, should be used.
- The types of design information the GOMS technique obtains are divided into functionality (including coverage and consistency), operator sequence, execution time, learning time, and error recovery support. Functional coverage refers to the system's ability to provide some reasonably simple and fast Method to accomplish every Goal. After the users' Goals are determined, typically all GOMS Methods can provide functional coverage. Functional consistency refers to the system's ability to provide similar Methods to perform similar Goals. NGOMSL is the most appropriate technique to use for functional consistency because it employs a consistency measure. This measure is based on learning time predictions in which a consistent interface uses the same Methods throughout for the same or similar Goals, resulting in fewer Methods to be learned.
- Operator sequence refers to whether a technique is capable of predicting the sequence of Operators a user must perform to accomplish a task. CMN-GOMS and NGOMSL can predict the sequence of motor Operators a user will execute while KLM and CPM-GOMS must be supplied with the Operators. The ability of CMN-GOMS and NGOMSL to predict the sequence of Operators is useful in deciding whether to incorporate a new Method into a system. It is also useful in determining how to optimally incorporate training in the use of the new Method. Although CPM-GOMS does not predict Operator sequence for parallel processes, it can be used to examine the effects of design modifications which may alter Operator sequence.
- Execution time can be predicted by any of the GOMS techniques given that the user is well practiced and makes no errors throughout the task. Due to the restrictions of thoroughly experienced users and no error performance, the predicted times are actually predictions of optimal execution times for a task. Many predictions of execution times with GOMS techniques have been documented and suggest that further empirical validation is unnecessary.
- Learning time is only provided by NGOMSL. This technique measures the time to learn the Methods in the model and any information required by long-term memory to accomplish the Methods. Since absolute predictions of learning time include many complexities, NGOMSL should be limited to learning time predictions for appropriate comparisons of alternative designs.
- Error recovery support refers to helping users recover from an error once it has occurred. GOMS typically recognizes whether the system offers a fast, simple and consistent Method for users to apply when recovering from errors. Any of the GOMS techniques can be used to measure the error recovery support of a system.
- A table that can be used to help choose the most appropriate GOMS technique based on the type of tasks the users perform and the types of design information is shown in
FIG. 13 . - CPM-GOMS stands for Cognitive, Perceptual, and Motor. It performs at a level of analysis in which the user's cognitive, perceptual, and motor activities are included as simple primitive Operators. This GOMS technique allows for parallel processing, unlike the other three GOMS models. Thus, cognitive, perceptual, and motor activities can be performed in parallel as the task demands.
- CPM-GOMS uses a schedule chart (PERT chart) to display the Operators. PERT charts clearly present the tasks which occur in parallel. CPM-GOMS also stands for Critical-Path Method because PERT charts calculate the critical path (the total time) to execute a task. The PERT charts serve as quantitative models as they tell how long certain activities take, and may assign numerical values to tasks. These charts are used by the system interface designers, along with the qualitative models, to design the user interface.
- CPM-GOMS is based on the Model Human Processor (MHP). In the MHP, the human is modeled by three processors, namely, a perceptual processor, a cognitive processor, and a motor processor. Although each processor runs serially, processors can run in parallel with each other. CPM-GOMS directly utilizes the MHP by recognizing each of the processors that perform the Operators. It also recognizes the sequential dependencies between the processors. Because CPM-GOMS assumes that the user can perform as fast as the MHP can process, the user must be thoroughly experienced in the task. CPM-GOMS does not model novice users.
- For the service representative embodiment, the CPM-GOMS technique will be chosen as the basis for the modeling technique to be used. The CPM-GOMS technique was chosen because it can most appropriately represent the parallel activities that service representatives engage in while performing their tasks for this embodiment.
- As previously indicated, CPM-GOMS performs at a level of analysis in which the user's cognitive, perceptual, and motor activities are included as simple primitive Operators. Since GOMS, in general, includes only the level of detail necessary to analyze the design problem, simple primitive Operators will only be examined when necessary for this project. It is expected that discrete groups of users will show behavioral and performance differences in the accomplishment of a task at a level of analysis higher than simple primitive Operators.
- The CPM-GOMS technique will incorporate the perceptual and cognitive processes together as one general processor. Thus, any internal Operators will be labeled as general mental Operators. The combining of perceptual and cognitive Operators into one classification was adopted to simplify the models and to maintain a level of analysis higher than primitive Operators.
- Since the CPM-GOMS uses a PERT chart to display the Operators, it can calculate the critical path of the task. The critical path is the sequence of Operators that produce the longest path through the chart. The sum of the sequence of these Operators equals the total time to execute the task. Empirical data from actual performance of observable motor Operators will be used in the PERT chart. Both empirical data and data from cognitive psychology will be used to determine execution times of the mental Operators.
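The critical-path computation over such a schedule chart can be sketched as a longest-path calculation on the Operator dependency graph. The Operators, durations, and dependencies below are invented for illustration; only the algorithm (finish time of an Operator = latest predecessor finish time plus its own duration) follows the description above.

```python
# Hypothetical sketch of the critical-path computation over a CPM-GOMS
# schedule (PERT) chart. Operator names, durations (ms), and
# dependencies are invented for illustration.

def critical_path_time(durations, predecessors):
    """Longest finish time through the dependency DAG, i.e., the
    predicted total execution time of the task."""
    finish = {}

    def finish_time(op):
        if op not in finish:
            # An Operator starts once all of its predecessors finish.
            start = max((finish_time(p) for p in predecessors.get(op, [])),
                        default=0)
            finish[op] = start + durations[op]
        return finish[op]

    return max(finish_time(op) for op in durations)

durations = {"perceive-screen": 100, "read-field": 290,
             "decide-response": 70, "speak-reply": 1200, "type-entry": 900}
predecessors = {"read-field": ["perceive-screen"],
                "decide-response": ["read-field"],
                "speak-reply": ["decide-response"],
                "type-entry": ["decide-response"]}

total_ms = critical_path_time(durations, predecessors)
```

Here speaking and typing run in parallel after the decision, so the predicted task time follows the longer (speaking) branch rather than the sum of all Operator durations.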
- These execution times are helpful to determine the overall time of execution per processor/category for each task. For example, in a hypothetical task, the service representative's execution time may be divided into 48% talking (motor operators: verbal responses), 39% listening (mental operators), 8% waiting for the system (system response), and 5% typing (motor operators: hand movements). The execution times are also instrumental when comparing different tasks, the same task in a different system, or the performance of different types of users.
- Another reason the CPM-GOMS technique was chosen is its ability to compare alternative designs that have not yet been built or prototyped. The CPM-GOMS models of the existing system can determine the effects of changes to current tasks, which in turn can facilitate the development of the proposed system. The models can also serve as baseline models for the proposed (redesigned) system: they can be compared with models of the proposed system to examine the efficiency and consistency of the proposed system and the ease with which users can convert to it.
- The CPM-GOMS technique can also be used to help develop documentation and training material. Typically, documentation should be task-oriented. GOMS provides a theory-based, empirically validated, and systematic approach to determining the necessary content of task-oriented documentation and training material. In addition, since CPM-GOMS can predict execution time differences between different Methods, the most efficient Methods and Selection rules could be highlighted in the documentation, and users could be educated and encouraged to adopt these in training sessions.
- The CPM-GOMS technique used will document four major categories, namely, system response, customer response, mental operators, and motor operators. Mental operators include perceptual and cognitive Operators. Motor operators are divided into manual movements, verbal responses, and eye movements.
- All aspects of each user's task will be sequentially categorized into cells during the task analysis. A cell is a categorized event at an appropriate level of detail. Customer response, motor operators, and system response will be recorded in the cells when the user is performing the task. Mental operators will be recorded in the cells after the completion of the task. Mental operators will be added based on empirically validated theories in cognitive psychology and input from the users.
- After the task is broken into cells and all of the cells are determined and categorized, the cells are connected to each other based on their precursor needs. Next, execution times are included for each cell and then the CPM-GOMS model emerges.
- There will be three levels of detail in task analysis: A, B, and C. In the level “A” task analysis, trained design team members capture observed behavior while sitting with the service representative and recording the representative's actions. The level “B” task analysis will be captured primarily through videotape and keystroke data. The level “C” task analysis will be captured through keystrokes, eye tracking of user attention, and videotape.
- Task analysis detail, level “A”, is focused on team members capturing behavior that can be observed while sitting next to the service representative and recording their actions manually. The steps of this level are shown in
FIG. 14 . - Task analysis detail, level “B”, is focused on team members capturing behavior that can be observed and recorded on video while the service representative is in their normal environment or while in a laboratory environment. The recorded information will be video and audio. The steps of this level are shown in
FIG. 15 . - Task analysis detail, level “C”, is focused on team members capturing behavior that can be observed and recorded on video and, most importantly where the service representative is looking or searching, while the service representative is in a laboratory environment. Although the recorded information will be video and audio, the focus for this level is on the eye tracking information and data. The steps of this level are shown in
FIG. 16 . - The information and data gathered during this level of task analysis can be very useful during the design phase of systems development. One particular eye tracking data set is the characteristic search patterns of the users. How the users search the screen provides excellent insight into how they process the information being presented on the screen. This information and data from the eye tracking analysis assist in describing how users (actually groups or categories of users, e.g., the blue group) perform tasks and how the interface assists them in performing those tasks. The sequence and duration of eye movements provide valuable information about how the user is using the information on the screen to make decisions. The sequence or pattern of eye movements is an indication of the strategy the user is employing, while the duration of an eye movement is an indication of the user's attention to that particular item of detail. This knowledge of strategy and attention is important to constructing a model of user behavior, i.e., how the blue group accomplishes its tasks.
- This level of task analysis is at the lowest level of detail and thus generates a large amount of data. One of the potential problems with this large amount of time-oriented data is keeping the various data files properly registered. This time registration problem is compensated for by having the user, at the start and at the end, perform a unique task that can be identified in all of the data files. An example of this unique task would be looking at the lower left part of the screen while pressing the “w” key for a period of five seconds.
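The marker-task registration described above can be carried out by finding the marker's timestamp in each data file and shifting that file's clock onto a common reference. A minimal sketch, with invented stream names and hypothetical local-clock timestamps:

```python
# Sketch of aligning multiple recording streams using the unique marker
# event (e.g., the five-second "w" key press) found in every file.
# All timestamps are hypothetical local-clock values in milliseconds.
streams = {
    "key_press":    {"marker_at": 12_000,
                     "events": [(12_000, "w down"), (17_000, "w up")]},
    "eye_movement": {"marker_at": 8_500,
                     "events": [(8_500, "fixate lower-left")]},
    "video":        {"marker_at": 3_200,
                     "events": [(3_200, "frame: marker visible")]},
}

def register(streams, reference="key_press"):
    """Shift every stream so its marker coincides with the reference clock."""
    ref = streams[reference]["marker_at"]
    aligned = {}
    for name, data in streams.items():
        offset = data["marker_at"] - ref
        aligned[name] = [(t - offset, label) for t, label in data["events"]]
    return aligned

aligned = register(streams)
```

After registration, the marker event carries the same timestamp in every stream, so all later events can be cross-referenced on one timeline.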
- The various data files to be discussed are key press, eye movement, video, audio, screen display, and screen display objects. The “key press” file needs to capture which keys were pressed, when they were pressed, and mouse activity. For the mouse, only the location (the X and Y coordinates) is recorded, and only when the mouse is clicked or double-clicked.
- The “eye movement” file needs to capture the location and duration of eye fixations. These eye fixations have a minimum duration threshold of 50 milliseconds so that simple eye movements from one location to another are not included in the file.
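The 50-millisecond threshold described above acts as a simple filter over the raw gaze records. A sketch, with invented gaze samples of the form (x, y, duration in milliseconds):

```python
# Sketch: keep only gaze events that qualify as fixations under the
# 50 ms minimum-duration threshold. Records are (x, y, duration_ms);
# the sample values are invented for illustration.
FIXATION_THRESHOLD_MS = 50

gaze_events = [
    (120, 340, 210),  # reading a field label
    (400, 310, 12),   # transit sample -- too brief to be a fixation
    (405, 315, 180),  # fixation on the adjacent field
    (900, 40, 48),    # just under threshold, excluded
]

fixations = [e for e in gaze_events if e[2] >= FIXATION_THRESHOLD_MS]
```

Only the two genuine fixations survive the filter, so simple eye movements from one location to another never enter the “eye movement” file.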
- The “video” file needs to capture the hand movements, screen display changes, notes taken by the user, and user facial expressions. This video would allow for 20 millisecond frame increments. The file format is the normal videotape format with time stamps.
- The “audio” file needs to capture the voice, speech, and sound from the user, customer, computer, and other devices. The file format is the normal audiotape format with time stamps.
- The “screen display” file needs to capture the image that was displayed on the screen for the user to observe. The screen image is only recorded when the image changes. The screen capture file will be in the normal format.
- The “screen display objects” file needs to capture the location of the various objects that comprise the screen display.
- Exemplary data file formats for key press, eye movement, screen display, and screen display objects are shown in
FIGS. 17A, 17B, 17C, and 17D, respectively. - The output from the describing-behaviors activity (CPM-GOMS task analysis) is a PERT chart, as shown in
FIG. 18 . This PERT chart is then used to create the block flow diagram in FIG. 19 . Both of these are used in creating the user behavior models. - Modeling User Behavior
- When the users' behaviors are well understood within a given user group, a user model can be constructed. There are two types or levels of models: qualitative and quantitative. Both of these model types are useful to the system interface design team and accomplish different objectives.
- The qualitative models are statements of how users within a specific user group behave in certain situations or while performing certain functions. For example, a qualitative statement for a user may be “This type of user has a great desire to navigate within the system quickly and is capable of and willing to memorize short-cut keys in order to jump from screen to screen quickly.” These qualitative models make a contribution to the design team by allowing design team participants to specifically represent each of the various user groups in the design process. As the various design decisions are addressed, these insights are extremely valuable, ensuring that the various user group needs are not lost in the development process.
- The quantitative models also represent the behavior of a specific user group, but in a manner that has greater detail and greater precision. The quantitative models are more formal and, unlike the qualitative models, incorporate the capability to make numerical performance predictions. These models make use of programming languages, which are well suited for such representation. Items such as arrival patterns, resource allocations, and task duration times are also included and combined with the flow of incoming work. These quantitative models are developed only to the degree of detail necessary to adequately represent the user groups for the design team during the system development process.
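A quantitative model of the kind described, combining an arrival pattern, a resource (the service representative), and task duration times to produce a numerical performance prediction, might be sketched as a minimal discrete-event simulation. All rates and durations below are illustrative assumptions, not data from the embodiment:

```python
import random

# Minimal quantitative model of one user group: calls arrive over time,
# a single service representative handles them, and the simulation
# predicts mean time-in-system per call. The arrival rate and TRIA
# handling-time parameters are illustrative assumptions, not real data.
random.seed(42)

def simulate(n_calls, mean_interarrival_s, handle_min, handle_mode, handle_max):
    """Single-representative queue; returns mean time-in-system (seconds)."""
    clock = 0.0           # current arrival time
    rep_free_at = 0.0     # when the representative next becomes idle
    total_in_system = 0.0
    for _ in range(n_calls):
        clock += random.expovariate(1.0 / mean_interarrival_s)
        start = max(clock, rep_free_at)                 # queue if rep is busy
        service = random.triangular(handle_min, handle_max, handle_mode)
        rep_free_at = start + service
        total_in_system += rep_free_at - clock          # waiting + handling
    return total_in_system / n_calls

mean_time = simulate(n_calls=500, mean_interarrival_s=400,
                     handle_min=120, handle_mode=240, handle_max=600)
```

Running the same model with a different user group's duration parameters (e.g., shorter keying times for shortcut-key users) yields the kind of numerical performance comparison the text attributes to quantitative models.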
- The process of building a model has some fairly well defined steps. These steps can be generalized as follows:
- Establish objectives and constraints. Focus on defining the problem as precisely as possible, with clearly defined project goals. Ensure the resources are available. Determine the boundaries of the project, i.e., what can and cannot be done. Build a conceptual, preliminary model. Select effectiveness measures, factors to vary, and the levels of those factors.
- Gather, analyze, and validate system data. Identify and prepare input data.
- Build an accurate, useful model, including computerizing it. Verify and validate the model by confirming the model operates the way it was intended and the model is representative of the actual process.
- Conduct simulation experiments. Finalize experimental design. Analyze and interpret results.
- Document and present results. Assist with implementation, if requested and necessary to ensure project success.
One of the most important steps in model building is the establishment of objectives and effectiveness metrics.
- The process of describing behaviors was discussed previously. These behavioral descriptions are in the form of modified CPM-GOMS flow diagrams. These flow diagrams are analyzed by identifying common elements that exist in many of the task descriptions.
- A subjective judgment is made to determine whether a set of behavioral descriptions is sufficiently similar to be combined into the same user model. This aspect of the analysis requires a large base of experience, and that experience is both beneficial and worthwhile. Answering this subjective question of the existence and location of a set of sufficiently similar behavioral descriptions poses tremendous challenges, but is critically important. One of the major considerations in this effort is deciding the relative consistency within each user group; put another way, how tightly consistent should the behavioral descriptions be within a given user group?
- The greater the consistency within a user group, the greater the validity of the model. At one extreme, a large, inconsistent user group reflects the situation common today, where all users are viewed from a single perspective. This large, inconsistent user group is not desirable; rather, a smaller, consistent set of behaviors within a group is desired, and the model that is then generated can be relatively tight. However, if a given group or category is allowed to be loosely grouped or loosely defined by including a broad set of behavioral characteristics, then the set of behavioral descriptions will be correspondingly broad and the model that is generated will be relatively loose.
- The actual construction of the user models from the descriptions is fairly straightforward for the external task actions, but can be challenging for the internal mental task actions.
- In order to maximize the relationship between the model and the behavioral description, the model layout is similar to the description layout. The behavioral description has four categories: Service Representative Verbal, Service Representative Motor, Service Representative Cognitive, and Customer. These four categories will also be mapped to the model construction. The format of the model construction is shown in
FIG. 20 . - The flow of the user model in
FIG. 20 begins at the “create” box in the upper left corner. This “create” box represents the arrival of a phone call to this service representative. Other parts of the model, not shown, describe and represent the behavior of the arrival of phone calls. Phone calls from customers have been shown to form a pattern of arrivals dependent upon time of day, day of week, day of month, and month of year. Each customer phone call has a set of attributes or characteristics. An example of an attribute is the type of call. An example of call type is a customer wanting to disconnect their service because they are moving to Florida. The processing of this call type begins with the service representative answering the phone call, “SR Talks”, with a greeting: “Southwestern Bell Telephone. How may I help you?” (SR is the service representative). This action is represented by the action box in the lower left corner. The notation below this box, and below the other action boxes, is a calculation of the duration of the specific action, i.e., how much time it will take to accomplish that action. The “TRIA” function is a triangular distribution commonly used in simulation, defined by minimum, mode, and maximum time parameters. In this case, these are represented by the variables SRV1, SRV2, and SRV3, respectively. - After the service representative asks what the customer wants, the customer replies, “Cust. 
Talks”, with the requested information and, as the customer is talking, the service representative begins to enter, “SR Types”, the information into the system.
- When the customer is finished providing their name (and phone number) and the service representative is finished entering the information into the system, the service representative moves their mouse, “SR Mouses”, to the “disconnect” button on their screen and clicks the mouse button. The remaining tasks to complete the disconnect function are not illustrated.
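The TRIA duration function used in the action boxes above corresponds to sampling from a triangular distribution, which Python's standard library provides directly (note that `random.triangular` takes its arguments in the order low, high, mode). The SRV1, SRV2, and SRV3 values below are hypothetical:

```python
import random

# Sketch of the TRIA(min, mode, max) duration calculation for an action
# such as "SR Talks". SRV1, SRV2, SRV3 (seconds) are hypothetical values.
random.seed(0)
SRV1, SRV2, SRV3 = 1.5, 2.0, 4.0   # minimum, mode, maximum

def tria(minimum, mode, maximum):
    # random.triangular's argument order is (low, high, mode), so reorder.
    return random.triangular(minimum, maximum, mode)

durations = [tria(SRV1, SRV2, SRV3) for _ in range(1000)]
```

Every sampled duration falls between SRV1 and SRV3, clustering around the mode SRV2, which is how the model assigns a delay time to each action box.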
- In this example, the user model is a series of action boxes, which, by their sequence and delay time, represent how a user will interact with a customer who wishes to disconnect their service. It is important to note that this flow illustrates how one group of users would handle this type of call. A different user group that has a preference for shortcut or “hot” keys rather than using the mouse would have a different user model. In particular, rather than the extreme right box of “SR Mouses”, their user model would have a box of “SR Keys”. The significance of this difference is that the time duration for a “SR Keys” action is shorter than the time duration for a “SR Mouses” action.
- The benefits of a user model become clearer with this example. The system or interface designers can anticipate how different user groups would interact with the various screens, how long each user group would require to process each of the various screens, and can identify potential improvements so that the screens are better suited to the characteristics of the different user groups.
- The user models that are initially constructed will be verified and validated to a certain extent. The models can be improved dramatically by using a refining stage after the initial models are constructed. The initial models will appear valid, possibly to an acceptable degree. However, it is important to note that the validity of almost all models will be improved by using the procedure discussed later. It is extremely useful to encourage the user model designers to enhance validity for a potentially small price in terms of time and effort.
- One purpose of constructing a user model is to have the ability to predict user behaviors in the future. There are two primary purposes for a user model that is predictive in nature. The first lies in the design phase of system development: it is of great value for the interface designer to be able to understand user behavior for a given design, because the designer can then better evaluate various alternative designs and their impact on user performance. The second purpose focuses on operational performance. In most cases, the impact of a new system design on the overall business process cannot be predicted. Of course, operations management is very concerned with the impact of a new, or re-designed, system on overall performance. In fact, most system development projects are justified on improving overall performance. It is important that the design of these new systems support the improvement of performance metrics.
- Defining the metrics used to analyze the outcomes of the user model is critical, because these simulation outcomes will assist in evaluating the various interface designs. These outcomes will also assist in justifying the cost/benefit of a development project. One example of a primary simulation outcome is shown in
FIG. 21 . The data in FIG. 21 is for illustration purposes only and does not represent actual results. - One use of the information in
FIG. 21 is an illustration of where time is consumed by call category. For example, for a Disconnect call type, 83% of the time is consumed with talking (both customer and service representative). Ten percent of the time is consumed with system response, 4% of the time is consumed with keying activity, and 3% of the time is consumed with reading information from the screen. For this call type, the benefits of spending resources toward reducing the keying time would probably be limited because keying is only 4% of the total time. However, investigating the sub-activities within the talking function could have a dramatic impact on this call type. A 25% improvement in the keying function would represent a 1% (25% of 4%) overall improvement in time duration, while a 25% improvement in the talking function would represent a 21% (25% of 83%) overall improvement in time duration. The usefulness of this type of information (that is contained in FIG. 21 ) is increased when the team is aware of effective resource allocation. These types of simulation outcomes can be an excellent source of guidance for the design team. - A table showing exemplary steps to construct the user model with a CPM-GOMS flow diagram, using the service representative embodiment, is shown in
FIG. 22 . After the model is developed, it should be refined. An exemplary procedure to refine the model, using the service representative embodiment, is shown in FIG. 23 . - The present invention has been illustrated using an embodiment of the service representative using a computer interface, but is not limited by this embodiment. This invention can be applied to modeling any type of user for any type of user interface, such as airplane cockpits, machine control panels, motor vehicle dashboards, boat control panels, as well as computer graphical user interfaces. This invention is not limited by these, but is meant to cover these and other applications or embodiments that are within the spirit and scope of the invention.
Claims (20)
1. A method of designing a user interface based on a list of user qualities and interactions of users, the method comprising:
categorizing the users into groups based on at least one of user characteristics, performance characteristics, behavioral characteristics, and cognitive workload; and
designing the interface based upon the categorized groups and on goals for the user interface.
2. The method according to claim 1 wherein the categorizing is based on at least the cognitive workload of the users that includes at least one of subjective measures techniques, performance-based measures techniques, and physiological measures techniques.
3. The method according to claim 2 wherein the cognitive workload is based on subjective measures techniques that judge the user's subjective workload.
4. The method according to claim 2 wherein the cognitive workload is based on performance-based measures techniques that measure a user's ability to perform a task and either a subsidiary paradigm or a loading task paradigm.
5. The method according to claim 2 wherein cognitive workload is based on physiological measures techniques that include examining physiological responses to task requirements.
6. The method according to claim 1 wherein the categorizing is based on at least the cognitive workload of the users that includes at least one of system demands, processing resources, and effort expenditure.
7. The method according to claim 1 further comprising:
describing user behavior information associated with each of the groups; and
modeling the behavior information of each group,
wherein the designing is further based on the modeled behavior information.
8. A method of designing a user interface based on a list of user qualities and interactions of users that are categorized into groups based on goals for the user interface, the method comprising:
describing the user interactions and the user qualities of each of the groups; and
designing the interface based upon the described user interactions and user qualities.
9. The method according to claim 8 wherein describing the user interactions of each group further comprises:
selecting at least one user from each group; and
obtaining additional behavioral information from the selected user.
10. The method according to claim 8 wherein the describing the user interactions and the user qualities is based on one of a plurality of GOMS analyses.
11. The method according to claim 10 wherein the describing the user interactions and the user qualities further comprises:
analyzing user goals for an interface.
12. The method according to claim 10 wherein the describing the user interactions and the user qualities further comprises:
analyzing actions a user performs with an interface.
13. The method according to claim 8 wherein the describing user interactions and the user qualities is based on a CPM-GOMS analysis.
14. The method according to claim 8 further comprising:
modeling the described user interactions and the user qualities of each group to aid in the design of the user interface.
15. A method of designing a user interface based on a list of user qualities and user interactions of users that are categorized into groups based on goals for the user interface, the method comprising:
modeling user interactions and the user qualities of each group with qualitative models and quantitative models; and
designing the interface based upon the modeled user interactions and user qualities.
16. The method according to claim 15 wherein modeling the described user interactions and the user qualities further comprises:
establishing objectives and constraints of the interface; and
gathering, analyzing, and validating system data of the interface.
17. The method according to claim 16 wherein modeling the described user interactions and the user qualities further comprises:
conducting simulation experiments on the interface; and
analyzing and interpreting results of the simulation experiments.
18. The method according to claim 15 wherein the qualitative models describe how users behave or perform.
19. The method according to claim 15 wherein the quantitative models represent the behavior of each of the groups.
20. The method according to claim 15 further comprising:
listing the user qualities and the interactions of users based on goals for the user interface;
categorizing the users into groups based on at least one of the user qualities and the user interactions; and
describing the user interactions and the user qualities of each of the groups,
wherein modeling further comprises modeling the user qualities and user interactions based upon the goals for the interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/920,201 US20050015744A1 (en) | 1998-06-03 | 2004-08-18 | Method for categorizing, describing and modeling types of system users |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/089,403 US6405159B2 (en) | 1998-06-03 | 1998-06-03 | Method for categorizing, describing and modeling types of system users |
US10/134,430 US6853966B2 (en) | 1998-06-03 | 2002-04-30 | Method for categorizing, describing and modeling types of system users |
US10/920,201 US20050015744A1 (en) | 1998-06-03 | 2004-08-18 | Method for categorizing, describing and modeling types of system users |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/134,430 Continuation US6853966B2 (en) | 1998-06-03 | 2002-04-30 | Method for categorizing, describing and modeling types of system users |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050015744A1 true US20050015744A1 (en) | 2005-01-20 |
Family
ID=22217456
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/089,403 Expired - Lifetime US6405159B2 (en) | 1998-06-03 | 1998-06-03 | Method for categorizing, describing and modeling types of system users |
US10/134,430 Expired - Lifetime US6853966B2 (en) | 1998-06-03 | 2002-04-30 | Method for categorizing, describing and modeling types of system users |
US10/920,201 Abandoned US20050015744A1 (en) | 1998-06-03 | 2004-08-18 | Method for categorizing, describing and modeling types of system users |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/089,403 Expired - Lifetime US6405159B2 (en) | 1998-06-03 | 1998-06-03 | Method for categorizing, describing and modeling types of system users |
US10/134,430 Expired - Lifetime US6853966B2 (en) | 1998-06-03 | 2002-04-30 | Method for categorizing, describing and modeling types of system users |
Country Status (1)
Country | Link |
---|---|
US (3) | US6405159B2 (en) |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030126583A1 (en) * | 2001-12-28 | 2003-07-03 | Cho Jin Hee | Method and apparatus for identifying software components for use in an object-oriented programming system |
US20050069102A1 (en) * | 2003-09-26 | 2005-03-31 | Sbc Knowledge Ventures, L.P. | VoiceXML and rule engine based switchboard for interactive voice response (IVR) services |
US20050147218A1 (en) * | 2004-01-05 | 2005-07-07 | Sbc Knowledge Ventures, L.P. | System and method for providing access to an interactive service offering |
US20060018443A1 (en) * | 2004-07-23 | 2006-01-26 | Sbc Knowledge Ventures, Lp | Announcement system and method of use |
US20060023863A1 (en) * | 2004-07-28 | 2006-02-02 | Sbc Knowledge Ventures, L.P. | Method and system for mapping caller information to call center agent transactions |
US20060026049A1 (en) * | 2004-07-28 | 2006-02-02 | Sbc Knowledge Ventures, L.P. | Method for identifying and prioritizing customer care automation |
US20060036437A1 (en) * | 2004-08-12 | 2006-02-16 | Sbc Knowledge Ventures, Lp | System and method for targeted tuning module of a speech recognition system |
US20060039547A1 (en) * | 2004-08-18 | 2006-02-23 | Sbc Knowledge Ventures, L.P. | System and method for providing computer assisted user support |
US20060062375A1 (en) * | 2004-09-23 | 2006-03-23 | Sbc Knowledge Ventures, L.P. | System and method for providing product offers at a call center |
US20060072737A1 (en) * | 2004-10-05 | 2006-04-06 | Jonathan Paden | Dynamic load balancing between multiple locations with different telephony system |
US20060093097A1 (en) * | 2004-11-02 | 2006-05-04 | Sbc Knowledge Ventures, L.P. | System and method for identifying telephone callers |
US20060115070A1 (en) * | 2004-11-29 | 2006-06-01 | Sbc Knowledge Ventures, L.P. | System and method for utilizing confidence levels in automated call routing |
US20060126811A1 (en) * | 2004-12-13 | 2006-06-15 | Sbc Knowledge Ventures, L.P. | System and method for routing calls |
US20060126808A1 (en) * | 2004-12-13 | 2006-06-15 | Sbc Knowledge Ventures, L.P. | System and method for measurement of call deflection |
US20060133587A1 (en) * | 2004-12-06 | 2006-06-22 | Sbc Knowledge Ventures, Lp | System and method for speech recognition-enabled automatic call routing |
US20060153345A1 (en) * | 2005-01-10 | 2006-07-13 | Sbc Knowledge Ventures, Lp | System and method for speech-enabled call routing |
US20060159240A1 (en) * | 2005-01-14 | 2006-07-20 | Sbc Knowledge Ventures, Lp | System and method of utilizing a hybrid semantic model for speech recognition |
US20060161431A1 (en) * | 2005-01-14 | 2006-07-20 | Bushey Robert R | System and method for independently recognizing and selecting actions and objects in a speech recognition system |
US20060177040A1 (en) * | 2005-02-04 | 2006-08-10 | Sbc Knowledge Ventures, L.P. | Call center system for multiple transaction selections |
US20060188087A1 (en) * | 2005-02-18 | 2006-08-24 | Sbc Knowledge Ventures, Lp | System and method for caller-controlled music on-hold |
US20060198505A1 (en) * | 2005-03-03 | 2006-09-07 | Sbc Knowledge Ventures, L.P. | System and method for on hold caller-controlled activities and entertainment |
US20060215831A1 (en) * | 2005-03-22 | 2006-09-28 | Sbc Knowledge Ventures, L.P. | System and method for utilizing virtual agents in an interactive voice response application |
US20060215833A1 (en) * | 2005-03-22 | 2006-09-28 | Sbc Knowledge Ventures, L.P. | System and method for automating customer relations in a communications environment |
US20060256932A1 (en) * | 2005-05-13 | 2006-11-16 | Sbc Knowledge Ventures, Lp | System and method of determining call treatment of repeat calls |
US20070019800A1 (en) * | 2005-06-03 | 2007-01-25 | Sbc Knowledge Ventures, Lp | Call routing system and method of using the same |
US20070025528A1 (en) * | 2005-07-07 | 2007-02-01 | Sbc Knowledge Ventures, L.P. | System and method for automated performance monitoring for a call servicing system |
US20070025542A1 (en) * | 2005-07-01 | 2007-02-01 | Sbc Knowledge Ventures, L.P. | System and method of automated order status retrieval |
US20070047718A1 (en) * | 2005-08-25 | 2007-03-01 | Sbc Knowledge Ventures, L.P. | System and method to access content from a speech-enabled automated system |
US20080008308A1 (en) * | 2004-12-06 | 2008-01-10 | Sbc Knowledge Ventures, Lp | System and method for routing calls |
US7668889B2 (en) | 2004-10-27 | 2010-02-23 | At&T Intellectual Property I, Lp | Method and system to combine keyword and natural language search results |
US20100091978A1 (en) * | 2005-06-03 | 2010-04-15 | At&T Intellectual Property I, L.P. | Call routing system and method of using the same |
US20100134619A1 (en) * | 2008-12-01 | 2010-06-03 | International Business Machines Corporation | Evaluating an effectiveness of a monitoring system |
US8548157B2 (en) | 2005-08-29 | 2013-10-01 | At&T Intellectual Property I, L.P. | System and method of managing incoming telephone calls at a call center |
WO2014052736A1 (en) * | 2012-09-27 | 2014-04-03 | Carnegie Mellon University | System and method of using task fingerprinting to predict task performance |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US9430211B2 (en) | 2012-08-31 | 2016-08-30 | Jpmorgan Chase Bank, N.A. | System and method for sharing information in a private ecosystem |
US20170372225A1 (en) * | 2016-06-28 | 2017-12-28 | Microsoft Technology Licensing, Llc | Targeting content to underperforming users in clusters |
US10230762B2 (en) | 2012-08-31 | 2019-03-12 | Jpmorgan Chase Bank, N.A. | System and method for sharing information in a private ecosystem |
WO2019074473A1 (en) * | 2017-10-09 | 2019-04-18 | Hewlett-Packard Development Company, L.P. | User capability score |
US10877866B2 (en) * | 2019-05-09 | 2020-12-29 | International Business Machines Corporation | Diagnosing workload performance problems in computer servers |
US11157832B2 (en) * | 2017-12-19 | 2021-10-26 | International Business Machines Corporation | Machine learning system for predicting optimal interruptions based on biometric data collected using wearable devices |
US11165679B2 (en) | 2019-05-09 | 2021-11-02 | International Business Machines Corporation | Establishing consumed resource to consumer relationships in computer servers using micro-trend technology |
Families Citing this family (129)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6405159B2 (en) * | 1998-06-03 | 2002-06-11 | Sbc Technology Resources, Inc. | Method for categorizing, describing and modeling types of system users |
US6694482B1 (en) * | 1998-09-11 | 2004-02-17 | Sbc Technology Resources, Inc. | System and methods for an architectural framework for design of an adaptive, personalized, interactive content delivery system |
US9183306B2 (en) * | 1998-12-18 | 2015-11-10 | Microsoft Technology Licensing, Llc | Automated selection of appropriate information based on a computer user's context |
US6466232B1 (en) * | 1998-12-18 | 2002-10-15 | Tangis Corporation | Method and system for controlling presentation of information to a user based on the user's condition |
US7076737B2 (en) * | 1998-12-18 | 2006-07-11 | Tangis Corporation | Thematic response to a computer user's context, such as by a wearable personal computer |
US7055101B2 (en) * | 1998-12-18 | 2006-05-30 | Tangis Corporation | Thematic response to a computer user's context, such as by a wearable personal computer |
US6801223B1 (en) * | 1998-12-18 | 2004-10-05 | Tangis Corporation | Managing interactions between computer users' context models |
US7137069B2 (en) | 1998-12-18 | 2006-11-14 | Tangis Corporation | Thematic response to a computer user's context, such as by a wearable personal computer |
US7779015B2 (en) * | 1998-12-18 | 2010-08-17 | Microsoft Corporation | Logging and analyzing context attributes |
US6747675B1 (en) * | 1998-12-18 | 2004-06-08 | Tangis Corporation | Mediating conflicts in computer user's context data |
US6920616B1 (en) | 1998-12-18 | 2005-07-19 | Tangis Corporation | Interface for exchanging context data |
US8181113B2 (en) | 1998-12-18 | 2012-05-15 | Microsoft Corporation | Mediating conflicts in computer users context data |
US8225214B2 (en) | 1998-12-18 | 2012-07-17 | Microsoft Corporation | Supplying enhanced computer user's context data |
US7073129B1 (en) | 1998-12-18 | 2006-07-04 | Tangis Corporation | Automated selection of appropriate information based on a computer user's context |
US7107539B2 (en) * | 1998-12-18 | 2006-09-12 | Tangis Corporation | Thematic response to a computer user's context, such as by a wearable personal computer |
US6968333B2 (en) | 2000-04-02 | 2005-11-22 | Tangis Corporation | Soliciting information based on a computer user's context |
US7046263B1 (en) | 1998-12-18 | 2006-05-16 | Tangis Corporation | Requesting computer user's context data |
US7231439B1 (en) | 2000-04-02 | 2007-06-12 | Tangis Corporation | Dynamically swapping modules for determining a computer user's context |
US6842877B2 (en) * | 1998-12-18 | 2005-01-11 | Tangis Corporation | Contextual responses based on automated learning techniques |
US6791580B1 (en) | 1998-12-18 | 2004-09-14 | Tangis Corporation | Supplying notifications related to supply and consumption of user context data |
US6513046B1 (en) | 1999-12-15 | 2003-01-28 | Tangis Corporation | Storing and recalling information to augment human memories |
US7225229B1 (en) | 1998-12-18 | 2007-05-29 | Tangis Corporation | Automated pushing of computer user's context data to clients |
US6633315B1 (en) | 1999-05-20 | 2003-10-14 | Microsoft Corporation | Context-based dynamic user interface elements |
US6567104B1 (en) | 1999-05-20 | 2003-05-20 | Microsoft Corporation | Time-based dynamic user interface elements |
US7224790B1 (en) * | 1999-05-27 | 2007-05-29 | Sbc Technology Resources, Inc. | Method to identify and categorize customer's goals and behaviors within a customer service center environment |
US7086007B1 (en) * | 1999-05-27 | 2006-08-01 | Sbc Technology Resources, Inc. | Method for integrating user models to interface design |
US6581037B1 (en) * | 1999-11-05 | 2003-06-17 | Michael Pak | System and method for analyzing human behavior |
US8321496B2 (en) * | 1999-12-13 | 2012-11-27 | Half.Com, Inc. | User evaluation of content on distributed communication network |
US6744879B1 (en) * | 2000-02-02 | 2004-06-01 | Rockwell Electronic Commerce Corp. | Profit-based method of assigning calls in a transaction processing system |
US8290809B1 (en) | 2000-02-14 | 2012-10-16 | Ebay Inc. | Determining a community rating for a user using feedback ratings of related users in an electronic environment |
US9614934B2 (en) | 2000-02-29 | 2017-04-04 | Paypal, Inc. | Methods and systems for harvesting comments regarding users on a network-based facility |
US7428505B1 (en) | 2000-02-29 | 2008-09-23 | Ebay, Inc. | Method and system for harvesting feedback and comments regarding multiple items from users of a network-based transaction facility |
US6778643B1 (en) | 2000-03-21 | 2004-08-17 | Sbc Technology Resources, Inc. | Interface and method of designing an interface |
US20040006473A1 (en) | 2002-07-02 | 2004-01-08 | Sbc Technology Resources, Inc. | Method and system for automated categorization of statements |
US7464153B1 (en) | 2000-04-02 | 2008-12-09 | Microsoft Corporation | Generating and supplying user context data |
US7606778B2 (en) * | 2000-06-12 | 2009-10-20 | Previsor, Inc. | Electronic predication system for assessing a suitability of job applicants for an employer |
US6655963B1 (en) * | 2000-07-31 | 2003-12-02 | Microsoft Corporation | Methods and apparatus for predicting and selectively collecting preferences based on personality diagnosis |
CA2417863A1 (en) * | 2000-08-03 | 2002-02-14 | Unicru, Inc. | Electronic employee selection systems and methods |
US20020054130A1 (en) | 2000-10-16 | 2002-05-09 | Abbott Kenneth H. | Dynamically displaying current status of tasks |
US20020078152A1 (en) | 2000-12-19 | 2002-06-20 | Barry Boone | Method and apparatus for providing predefined feedback |
US6619961B2 (en) * | 2001-02-23 | 2003-09-16 | John Charlton Baird | Computerized system and method for simultaneously representing and recording dynamic judgments |
US20040015906A1 (en) | 2001-04-30 | 2004-01-22 | Goraya Tanvir Y. | Adaptive dynamic personal modeling system and method |
IES20020336A2 (en) * | 2001-05-10 | 2002-11-13 | Changing Worlds Ltd | Intelligent internet website with hierarchical menu |
WO2003005249A2 (en) * | 2001-07-04 | 2003-01-16 | Kinematik Research Limited | An information management and control system |
US20030167182A1 (en) * | 2001-07-23 | 2003-09-04 | International Business Machines Corporation | Method and apparatus for providing symbolic mode checking of business application requirements |
US7251613B2 (en) * | 2001-09-05 | 2007-07-31 | David Flores | System and method for generating a multi-layered strategy description including integrated implementation requirements |
US20030046125A1 (en) * | 2001-09-05 | 2003-03-06 | Nextstrat, Inc. | System and method for enterprise strategy management |
US6712468B1 (en) | 2001-12-12 | 2004-03-30 | Gregory T. Edwards | Techniques for facilitating use of eye tracking data |
US7305070B2 (en) | 2002-01-30 | 2007-12-04 | At&T Labs, Inc. | Sequential presentation of long instructions in an interactive voice response system |
US7337120B2 (en) * | 2002-02-07 | 2008-02-26 | Accenture Global Services Gmbh | Providing human performance management data and insight |
US6914975B2 (en) * | 2002-02-21 | 2005-07-05 | Sbc Properties, L.P. | Interactive dialog-based training method |
US20050091601A1 (en) * | 2002-03-07 | 2005-04-28 | Raymond Michelle A. | Interaction design system |
US6871163B2 (en) * | 2002-05-31 | 2005-03-22 | Sap Aktiengesellschaft | Behavior-based adaptation of computer systems |
US7624023B2 (en) * | 2002-06-04 | 2009-11-24 | International Business Machines Corporation | Client opportunity modeling tool |
US20040073569A1 (en) * | 2002-09-27 | 2004-04-15 | Sbc Properties, L.P. | System and method for integrating a personal adaptive agent |
AU2003295572A1 (en) * | 2002-11-15 | 2004-06-15 | Axios Partners, Llc | Value innovation management system and methods |
US7610288B2 (en) * | 2003-01-07 | 2009-10-27 | At&T Intellectual Property I, L.P. | Performance management system and method |
US20040210454A1 (en) * | 2003-02-26 | 2004-10-21 | Coughlin Bruce M. | System and method for providing technology data integration services |
US7818192B2 (en) * | 2003-02-28 | 2010-10-19 | Omnex Systems L.L.C. | Quality information management system |
US7895649B1 (en) * | 2003-04-04 | 2011-02-22 | Raytheon Company | Dynamic rule generation for an enterprise intrusion detection system |
US7881493B1 (en) | 2003-04-11 | 2011-02-01 | Eyetools, Inc. | Methods and apparatuses for use of eye interpretation information |
US6963826B2 (en) * | 2003-09-22 | 2005-11-08 | C3I, Inc. | Performance optimizer system and method |
US7716079B2 (en) * | 2003-11-20 | 2010-05-11 | Ebay Inc. | Feedback cancellation in a network-based transaction facility |
US7027586B2 (en) * | 2003-12-18 | 2006-04-11 | Sbc Knowledge Ventures, L.P. | Intelligently routing customer communications |
US7941335B2 (en) * | 2004-01-24 | 2011-05-10 | Inovation Inc. | System and method for performing conjoint analysis |
US20050246241A1 (en) * | 2004-04-30 | 2005-11-03 | Rightnow Technologies, Inc. | Method and system for monitoring successful use of application software |
EP1763806A2 (en) * | 2004-06-25 | 2007-03-21 | Technische Universität Berlin | Simulation system and simulation method for examining the operability of a means of transport |
US20050286709A1 (en) * | 2004-06-28 | 2005-12-29 | Steve Horton | Customer service marketing |
US7350190B2 (en) * | 2004-06-29 | 2008-03-25 | International Business Machines Corporation | Computer implemented modeling and analysis of an application user interface |
EP1791612A2 (en) * | 2004-08-31 | 2007-06-06 | Information in Place, Inc. | Object oriented mixed reality and video game authoring tool system and method background of the invention |
US7539654B2 (en) * | 2005-01-21 | 2009-05-26 | International Business Machines Corporation | User interaction management using an ongoing estimate of user interaction skills |
WO2006091893A2 (en) * | 2005-02-23 | 2006-08-31 | Eyetracking, Inc. | Mental alertness level determination |
US7438418B2 (en) * | 2005-02-23 | 2008-10-21 | Eyetracking, Inc. | Mental alertness and mental proficiency level determination |
US7472097B1 (en) * | 2005-03-23 | 2008-12-30 | Kronos Talent Management Inc. | Employee selection via multiple neural networks |
US8094790B2 (en) | 2005-05-18 | 2012-01-10 | Mattersight Corporation | Method and software for training a customer service representative by analysis of a telephonic interaction between a customer and a contact center |
US7511606B2 (en) * | 2005-05-18 | 2009-03-31 | Lojack Operating Company Lp | Vehicle locating unit with input voltage protection |
US20060265088A1 (en) * | 2005-05-18 | 2006-11-23 | Roger Warford | Method and system for recording an electronic communication and extracting constituent audio data therefrom |
US7995717B2 (en) | 2005-05-18 | 2011-08-09 | Mattersight Corporation | Method and system for analyzing separated voice data of a telephonic communication between a customer and a contact center by applying a psychological behavioral model thereto |
US8094803B2 (en) | 2005-05-18 | 2012-01-10 | Mattersight Corporation | Method and system for analyzing separated voice data of a telephonic communication between a customer and a contact center by applying a psychological behavioral model thereto |
US8473490B2 (en) * | 2005-09-27 | 2013-06-25 | Match.Com, L.L.C. | System and method for providing a near matches feature in a network environment |
US20070073549A1 (en) * | 2005-09-27 | 2007-03-29 | Match.Com, L.P. | System and method for providing testing and matching in a network environment |
US7673287B2 (en) * | 2005-10-11 | 2010-03-02 | Sap Ag | Testing usability of a software program |
US20070088714A1 (en) * | 2005-10-19 | 2007-04-19 | Edwards Gregory T | Methods and apparatuses for collection, processing, and utilization of viewing data |
WO2007056287A2 (en) * | 2005-11-04 | 2007-05-18 | Eye Tracking, Inc. | Generation of test stimuli in visual media |
US8155446B2 (en) * | 2005-11-04 | 2012-04-10 | Eyetracking, Inc. | Characterizing dynamic regions of digital media data |
US20070121873A1 (en) * | 2005-11-18 | 2007-05-31 | Medlin Jennifer P | Methods, systems, and products for managing communications |
US7760910B2 (en) * | 2005-12-12 | 2010-07-20 | Eyetools, Inc. | Evaluation of visual stimuli using existing viewing data |
US7773731B2 (en) * | 2005-12-14 | 2010-08-10 | At&T Intellectual Property I, L. P. | Methods, systems, and products for dynamically-changing IVR architectures |
US7577664B2 (en) | 2005-12-16 | 2009-08-18 | At&T Intellectual Property I, L.P. | Methods, systems, and products for searching interactive menu prompting system architectures |
US8050392B2 (en) * | 2006-03-17 | 2011-11-01 | At&T Intellectual Property I, L.P. | Methods systems, and products for processing responses in prompting systems |
US20140337938A1 (en) * | 2006-03-17 | 2014-11-13 | Raj Abhyanker | Bookmarking and lassoing in a geo-spatial environment |
US7961856B2 (en) * | 2006-03-17 | 2011-06-14 | At&T Intellectual Property I, L. P. | Methods, systems, and products for processing responses in prompting systems |
US7565339B2 (en) * | 2006-03-31 | 2009-07-21 | Agiledelta, Inc. | Knowledge based encoding of data |
US7865383B2 (en) * | 2006-06-23 | 2011-01-04 | Dennis William Tafoya | System and method for examining, describing, analyzing and/or predicting organization performance in response to events |
US20080167891A1 (en) * | 2006-12-18 | 2008-07-10 | University Of Virginia Patent Foundation | Systems, Devices and Methods for Consumer Segmentation |
US20080184154A1 (en) * | 2007-01-31 | 2008-07-31 | Goraya Tanvir Y | Mathematical simulation of a cause model |
US8023639B2 (en) | 2007-03-30 | 2011-09-20 | Mattersight Corporation | Method and system determining the complexity of a telephonic communication received by a contact center |
US20080240374A1 (en) * | 2007-03-30 | 2008-10-02 | Kelly Conway | Method and system for linking customer conversation channels |
US8718262B2 (en) | 2007-03-30 | 2014-05-06 | Mattersight Corporation | Method and system for automatically routing a telephonic communication base on analytic attributes associated with prior telephonic communication |
US7869586B2 (en) | 2007-03-30 | 2011-01-11 | Eloyalty Corporation | Method and system for aggregating and analyzing data relating to a plurality of interactions between a customer and a contact center and generating business process analytics |
US20080240404A1 (en) * | 2007-03-30 | 2008-10-02 | Kelly Conway | Method and system for aggregating and analyzing data relating to an interaction between a customer and a contact center agent |
US8021298B2 (en) * | 2007-06-28 | 2011-09-20 | Psychological Applications Llc | System and method for mapping pain depth |
US10419611B2 (en) | 2007-09-28 | 2019-09-17 | Mattersight Corporation | System and methods for determining trends in electronic communications |
US8140368B2 (en) * | 2008-04-07 | 2012-03-20 | International Business Machines Corporation | Method and system for routing a task to an employee based on physical and emotional state |
US8504599B1 (en) | 2008-10-07 | 2013-08-06 | Honda Motor Co., Ltd. | Intelligent system for database retrieval |
US9741147B2 (en) * | 2008-12-12 | 2017-08-22 | International Business Machines Corporation | System and method to modify avatar characteristics based on inferred conditions |
US8583563B1 (en) * | 2008-12-23 | 2013-11-12 | Match.Com, L.L.C. | System and method for providing enhanced matching based on personality analysis |
WO2011008855A2 (en) * | 2009-07-14 | 2011-01-20 | Pinchuk Steven G | Method of predicting a plurality of behavioral events and method of displaying information |
US8117054B2 (en) * | 2009-11-20 | 2012-02-14 | Palo Alto Research Center Incorporated | Method for estimating task stress factors from temporal work patterns |
WO2011137935A1 (en) | 2010-05-07 | 2011-11-10 | Ulysses Systems (Uk) Limited | System and method for identifying relevant information for an enterprise |
JP5928344B2 (en) * | 2011-01-27 | 2016-06-01 | 日本電気株式会社 | UI (UserInterface) creation support apparatus, UI creation support method, and program |
US20130019195A1 (en) * | 2011-07-12 | 2013-01-17 | Oracle International Corporation | Aggregating multiple information sources (dashboard4life) |
US8923501B2 (en) * | 2011-07-29 | 2014-12-30 | Avaya Inc. | Method and system for managing contacts in a contact center |
US10083247B2 (en) | 2011-10-01 | 2018-09-25 | Oracle International Corporation | Generating state-driven role-based landing pages |
US9886676B1 (en) | 2012-03-30 | 2018-02-06 | Liberty Mutual Insurance Company | Behavior-based business recommendations |
US9348886B2 (en) * | 2012-12-19 | 2016-05-24 | Facebook, Inc. | Formation and description of user subgroups |
US9191510B2 (en) | 2013-03-14 | 2015-11-17 | Mattersight Corporation | Methods and system for analyzing multichannel electronic communication data |
US9733894B2 (en) * | 2013-07-02 | 2017-08-15 | 24/7 Customer, Inc. | Method and apparatus for facilitating voice user interface design |
US9811830B2 (en) | 2013-07-03 | 2017-11-07 | Google Inc. | Method, medium, and system for online fraud prevention based on user physical location data |
US10628894B1 (en) | 2015-01-28 | 2020-04-21 | Intuit Inc. | Method and system for providing personalized responses to questions received from a user of an electronic tax return preparation system |
US10380656B2 (en) | 2015-02-27 | 2019-08-13 | Ebay Inc. | Dynamic predefined product reviews |
US10176534B1 (en) | 2015-04-20 | 2019-01-08 | Intuit Inc. | Method and system for providing an analytics model architecture to reduce abandonment of tax return preparation sessions by potential customers |
US20170042461A1 (en) * | 2015-07-16 | 2017-02-16 | Battelle Memorial Institute | Techniques to evaluate and enhance cognitive performance |
CN105260414B (en) * | 2015-09-24 | 2018-10-19 | 精硕科技(北京)股份有限公司 | User behavior similarity calculation method and device |
US10937109B1 (en) | 2016-01-08 | 2021-03-02 | Intuit Inc. | Method and technique to calculate and provide confidence score for predicted tax due/refund |
US9805306B1 (en) | 2016-11-23 | 2017-10-31 | Accenture Global Solutions Limited | Cognitive robotics analyzer |
US20190080352A1 (en) * | 2017-09-11 | 2019-03-14 | Adobe Systems Incorporated | Segment Extension Based on Lookalike Selection |
US11204744B1 (en) | 2020-05-26 | 2021-12-21 | International Business Machines Corporation | Multidimensional digital experience analysis |
CN115051967B (en) * | 2022-04-28 | 2023-10-03 | 杭州脸脸会网络技术有限公司 | Data display method and device, electronic device and storage medium |
Patent Citations (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5493608A (en) * | 1994-03-17 | 1996-02-20 | Alpha Logic, Incorporated | Caller adaptive voice response system |
US5535321A (en) * | 1991-02-14 | 1996-07-09 | International Business Machines Corporation | Method and apparatus for variable complexity user interface in a data processing system |
US5724987A (en) * | 1991-09-26 | 1998-03-10 | Sam Technology, Inc. | Neurocognitive adaptive computer-aided training method and system |
US5790117A (en) * | 1992-11-02 | 1998-08-04 | Borland International, Inc. | System and methods for improved program testing |
US5974253A (en) * | 1992-07-22 | 1999-10-26 | Bull S.A. | Using an embedded interpreted language to develop an interactive user-interface description tool |
US5999908A (en) * | 1992-08-06 | 1999-12-07 | Abelow; Daniel H. | Customer-based product design module |
US6044146A (en) * | 1998-02-17 | 2000-03-28 | Genesys Telecommunications Laboratories, Inc. | Method and apparatus for call distribution and override with priority |
US6058435A (en) * | 1997-02-04 | 2000-05-02 | Siemens Information And Communications Networks, Inc. | Apparatus and methods for responding to multimedia communications based on content analysis |
US6173279B1 (en) * | 1998-04-09 | 2001-01-09 | At&T Corp. | Method of using a natural language interface to retrieve information from one or more data resources |
US6223188B1 (en) * | 1996-04-10 | 2001-04-24 | Sun Microsystems, Inc. | Presentation of link information as an aid to hypermedia navigation |
US6295509B1 (en) * | 1997-10-17 | 2001-09-25 | Stanley W. Driskell | Objective, quantitative method for measuring the mental effort of managing a computer-human interface |
US6389400B1 (en) * | 1998-08-20 | 2002-05-14 | Sbc Technology Resources, Inc. | System and methods for intelligent routing of customer requests using customer and agent models |
US6405159B2 (en) * | 1998-06-03 | 2002-06-11 | Sbc Technology Resources, Inc. | Method for categorizing, describing and modeling types of system users |
US6411687B1 (en) * | 1997-11-11 | 2002-06-25 | Mitel Knowledge Corporation | Call routing based on the caller's mood |
US6418216B1 (en) * | 1998-06-09 | 2002-07-09 | International Business Machines Corporation | Caller-controller barge-in telephone service |
US6427142B1 (en) * | 1998-01-06 | 2002-07-30 | Chi Systems, Inc. | Intelligent agent workbench |
US6456619B1 (en) * | 1997-12-04 | 2002-09-24 | Siemens Information And Communication Networks, Inc. | Method and system for supporting a decision tree with placeholder capability |
US6456699B1 (en) * | 1998-11-30 | 2002-09-24 | At&T Corp. | Web-based generation of telephony-based interactive voice response applications |
US6483523B1 (en) * | 1998-05-08 | 2002-11-19 | Institute For Information Industry | Personalized interface browser and its browsing method |
US6487277B2 (en) * | 1997-09-19 | 2002-11-26 | Siemens Information And Communication Networks, Inc. | Apparatus and method for improving the user interface of integrated voice response systems |
US6539080B1 (en) * | 1998-07-14 | 2003-03-25 | Ameritech Corporation | Method and system for providing quick directions |
US20030158655A1 (en) * | 1999-10-19 | 2003-08-21 | American Calcar Inc. | Technique for effective navigation based on user preferences |
US6654447B1 (en) * | 2000-10-13 | 2003-11-25 | Cisco Technology, Inc. | Method and system for pausing a session with an interactive voice response unit |
US6658389B1 (en) * | 2000-03-24 | 2003-12-02 | Ahmet Alpdemir | System, method, and business model for speech-interactive information system having business self-promotion, audio coupon and rating features |
US20040143484A1 (en) * | 2003-01-16 | 2004-07-22 | Viren Kapadia | Systems and methods for distribution of sales leads |
US20040213400A1 (en) * | 2003-01-06 | 2004-10-28 | Golitsin Vladimir G. | Method and apparatus for multimedia interaction routing according to agent capacity sets |
US20050111653A1 (en) * | 2003-04-15 | 2005-05-26 | Robert Joyce | Instant message processing in a customer interaction system |
Family Cites Families (96)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4310727A (en) | 1980-02-04 | 1982-01-12 | Bell Telephone Laboratories, Incorporated | Method of processing special service telephone calls |
JPS6134669A (en) | 1984-07-27 | 1986-02-18 | Hitachi Ltd | Automatic transaction system |
US4922519A (en) | 1986-05-07 | 1990-05-01 | American Telephone And Telegraph Company | Automated operator assistance calls with voice processing |
US4694483A (en) | 1986-06-02 | 1987-09-15 | Innings Telecom Inc. | Computerized system for routing incoming telephone calls to a plurality of agent positions |
US4964077A (en) | 1987-10-06 | 1990-10-16 | International Business Machines Corporation | Method for automatically adjusting help information displayed in an online interactive system |
US5115501A (en) | 1988-11-04 | 1992-05-19 | International Business Machines Corporation | Procedure for automatically customizing the user interface of application programs |
US5204968A (en) | 1989-03-27 | 1993-04-20 | Xerox Corporation | Automatic determination of operator training level for displaying appropriate operator prompts |
US5870308A (en) | 1990-04-06 | 1999-02-09 | Lsi Logic Corporation | Method and system for creating and validating low-level description of electronic design |
US5311422A (en) | 1990-06-28 | 1994-05-10 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | General purpose architecture for intelligent computer-aided training |
US5327529A (en) | 1990-09-24 | 1994-07-05 | Geoworks | Process of designing user's interfaces for application programs |
AU9063891A (en) | 1990-11-20 | 1992-06-11 | Unifi Communications Corporation | Telephone call handling system |
US5323452A (en) | 1990-12-18 | 1994-06-21 | Bell Communications Research, Inc. | Visual programming of telephone network call processing logic |
US5206903A (en) | 1990-12-26 | 1993-04-27 | At&T Bell Laboratories | Automatic call distribution based on matching required skills with agents skills |
US5263167A (en) | 1991-11-22 | 1993-11-16 | International Business Machines Corporation | User interface for a relational database using a task object for defining search queries in response to a profile object which describes user proficiency |
US5734709A (en) | 1992-01-27 | 1998-03-31 | Sprint Communications Co. L.P. | System for customer configuration of call routing in a telecommunications network |
US5335269A (en) | 1992-03-12 | 1994-08-02 | Rockwell International Corporation | Two dimensional routing apparatus in an automatic call director-type system |
US5388198A (en) | 1992-04-16 | 1995-02-07 | Symantec Corporation | Proactive presentation of automating features to a computer user |
US5729600A (en) | 1992-06-25 | 1998-03-17 | Rockwell International Corporation | Automatic call distributor with automated voice responsive call servicing system and method |
DE69327691D1 (en) | 1992-07-30 | 2000-03-02 | Teknekron Infowitch Corp | Method and system for monitoring and / or controlling the performance of an organization |
US5335268A (en) | 1992-10-22 | 1994-08-02 | Mci Communications Corporation | Intelligent routing of special service telephone traffic |
US5353401A (en) * | 1992-11-06 | 1994-10-04 | Ricoh Company, Ltd. | Automatic interface layout generator for database systems |
US5659724A (en) * | 1992-11-06 | 1997-08-19 | NCR | Interactive data analysis apparatus employing a knowledge base |
US5420975A (en) | 1992-12-28 | 1995-05-30 | International Business Machines Corporation | Method and system for automatic alteration of display of menu options |
US5864844A (en) | 1993-02-18 | 1999-01-26 | Apple Computer, Inc. | System and method for enhancing a user interface with a computer based training tool |
CA2091658A1 (en) | 1993-03-15 | 1994-09-16 | Matthew Lennig | Method and apparatus for automation of directory assistance using speech recognition |
US5586060A (en) * | 1993-06-25 | 1996-12-17 | Sharp Kabushiki Kaisha | Compact electronic equipment having a statistical function |
AU677393B2 (en) | 1993-07-08 | 1997-04-24 | E-Talk Corporation | Method and system for transferring calls and call-related data between a plurality of call centres |
EP0644510B1 (en) | 1993-09-22 | 1999-08-18 | Teknekron Infoswitch Corporation | Telecommunications system monitoring |
CA2179523A1 (en) | 1993-12-23 | 1995-06-29 | David A. Boulton | Method and apparatus for implementing user feedback |
US5519772A (en) | 1994-01-31 | 1996-05-21 | Bell Communications Research, Inc. | Network-based telephone system having interactive capabilities |
US5533107A (en) | 1994-03-01 | 1996-07-02 | Bellsouth Corporation | Method for routing calls based on predetermined assignments of callers geographic locations |
US5561711A (en) | 1994-03-09 | 1996-10-01 | Us West Technologies, Inc. | Predictive calling scheduling system and method |
WO1995027360A1 (en) | 1994-03-31 | 1995-10-12 | Citibank, N.A. | Interactive voice response system |
US5537470A (en) | 1994-04-06 | 1996-07-16 | At&T Corp. | Method and apparatus for handling in-bound telemarketing calls |
US5724262A (en) | 1994-05-31 | 1998-03-03 | Paradyne Corporation | Method for measuring the usability of a system and for task analysis and re-engineering |
US5586171A (en) | 1994-07-07 | 1996-12-17 | Bell Atlantic Network Services, Inc. | Selection of a voice recognition data base responsive to video data |
JP2866310B2 (en) | 1994-08-05 | 1999-03-08 | ケイディディ株式会社 | International call termination control device |
US5706334A (en) | 1994-08-18 | 1998-01-06 | Lucent Technologies Inc. | Apparatus for providing a graphical control interface |
US5819221A (en) | 1994-08-31 | 1998-10-06 | Texas Instruments Incorporated | Speech recognition using clustered between word and/or phrase coarticulation |
US5530744A (en) | 1994-09-20 | 1996-06-25 | At&T Corp. | Method and system for dynamic customized call routing |
US5600781A (en) | 1994-09-30 | 1997-02-04 | Intel Corporation | Method and apparatus for creating a portable personalized operating environment |
US5586219A (en) | 1994-09-30 | 1996-12-17 | Yufik; Yan M. | Probabilistic resource allocation system with self-adaptive capability |
US5594791A (en) | 1994-10-05 | 1997-01-14 | Inventions, Inc. | Method and apparatus for providing result-oriented customer service |
US5615323A (en) * | 1994-11-04 | 1997-03-25 | Concord Communications, Inc. | Displaying resource performance and utilization information |
US5758257A (en) | 1994-11-29 | 1998-05-26 | Herz; Frederick | System and method for scheduling broadcast of and access to video programs and other data using customer profiles |
US5832430A (en) | 1994-12-29 | 1998-11-03 | Lucent Technologies, Inc. | Devices and methods for speech recognition of vocabulary words with simultaneous detection and verification |
US5710884A (en) | 1995-03-29 | 1998-01-20 | Intel Corporation | System for automatically updating personal profile server with updates to additional user information gathered from monitoring user's electronic consuming habits generated on computer during use |
EP0740450B1 (en) | 1995-04-24 | 2006-06-14 | International Business Machines Corporation | Method and apparatus for skill-based routing in a call center |
US5657383A (en) | 1995-06-06 | 1997-08-12 | Lucent Technologies Inc. | Flexible customer controlled telecommunications handling |
US5809282A (en) | 1995-06-07 | 1998-09-15 | Grc International, Inc. | Automated network simulation and optimization system |
US5740549A (en) | 1995-06-12 | 1998-04-14 | Pointcast, Inc. | Information and advertising distribution system and method |
JP3453456B2 (en) | 1995-06-19 | 2003-10-06 | キヤノン株式会社 | State sharing model design method and apparatus, and speech recognition method and apparatus using the state sharing model |
US5684872A (en) | 1995-07-21 | 1997-11-04 | Lucent Technologies Inc. | Prediction of a caller's motivation as a basis for selecting treatment of an incoming call |
US6088429A (en) | 1998-04-07 | 2000-07-11 | Mumps Audiofax, Inc. | Interactive telephony system |
US5675707A (en) | 1995-09-15 | 1997-10-07 | At&T | Automated call router system and method |
US5832428A (en) | 1995-10-04 | 1998-11-03 | Apple Computer, Inc. | Search engine for phrase recognition based on prefix/body/suffix architecture |
US5771276A (en) | 1995-10-10 | 1998-06-23 | Ast Research, Inc. | Voice templates for interactive voice mail and voice response system |
US6061433A (en) | 1995-10-19 | 2000-05-09 | Intervoice Limited Partnership | Dynamically changeable menus based on externally available data |
US5802526A (en) | 1995-11-15 | 1998-09-01 | Microsoft Corporation | System and method for graphically displaying and navigating through an interactive voice response menu |
US5821936A (en) | 1995-11-20 | 1998-10-13 | Siemens Business Communication Systems, Inc. | Interface method and system for sequencing display menu items |
US5848396A (en) * | 1996-04-26 | 1998-12-08 | Freedom Of Information, Inc. | Method and apparatus for determining behavioral profile of a computer user |
US5727950A (en) * | 1996-05-22 | 1998-03-17 | Netsage Corporation | Agent based instruction system and method |
US6014638A (en) | 1996-05-29 | 2000-01-11 | America Online, Inc. | System for customizing computer displays in accordance with user preferences |
US5901214A (en) | 1996-06-10 | 1999-05-04 | Murex Securities, Ltd. | One number intelligent call processing system |
US6092105A (en) | 1996-07-12 | 2000-07-18 | Intraware, Inc. | System and method for vending retail software and other sets of information to end users |
US5822744A (en) | 1996-07-15 | 1998-10-13 | Kesel; Brad | Consumer comment reporting apparatus and method |
US6157808A (en) | 1996-07-17 | 2000-12-05 | Gpu, Inc. | Computerized employee certification and training system |
US5757644A (en) | 1996-07-25 | 1998-05-26 | Eis International, Inc. | Voice interactive call center training method using actual screens and screen logic |
US5864605A (en) | 1996-08-22 | 1999-01-26 | At&T Corp | Voice menu optimization method and system |
US5884029A (en) * | 1996-11-14 | 1999-03-16 | International Business Machines Corporation | User interaction with intelligent virtual objects, avatars, which interact with other avatars controlled by different users |
US5793368A (en) | 1996-11-14 | 1998-08-11 | Triteal Corporation | Method for dynamically switching between visual styles |
US5999611A (en) | 1996-11-19 | 1999-12-07 | Stentor Resource Centre Inc. | Subscriber interface for accessing and operating personal communication services |
US6148063A (en) | 1996-11-29 | 2000-11-14 | Nortel Networks Corporation | Semi-interruptible messages for telephone systems making voice announcements |
US5903641A (en) | 1997-01-28 | 1999-05-11 | Lucent Technologies Inc. | Automatic dynamic changing of agents' call-handling assignments |
US5899992A (en) | 1997-02-14 | 1999-05-04 | International Business Machines Corporation | Scalable set oriented classifier |
US5923745A (en) | 1997-02-28 | 1999-07-13 | Teknekron Infoswitch Corporation | Routing calls to call centers |
US5953406A (en) | 1997-05-20 | 1999-09-14 | Mci Communications Corporation | Generalized customer profile editor for call center services |
US6341267B1 (en) * | 1997-07-02 | 2002-01-22 | Enhancement Of Human Potential, Inc. | Methods, systems and apparatuses for matching individuals with behavioral requirements and for managing providers of services to evaluate or increase individuals' behavioral capabilities |
US6044355A (en) | 1997-07-09 | 2000-03-28 | Iex Corporation | Skills-based scheduling for telephone call centers |
US5974412A (en) * | 1997-09-24 | 1999-10-26 | Sapient Health Network | Intelligent query system for automatically indexing information in a database and automatically categorizing users |
US6134315A (en) | 1997-09-30 | 2000-10-17 | Genesys Telecommunications Laboratories, Inc. | Metadata-based network routing |
US6035336A (en) | 1997-10-17 | 2000-03-07 | International Business Machines Corporation | Audio ticker system and method for presenting push information including pre-recorded audio |
US6055542A (en) * | 1997-10-29 | 2000-04-25 | International Business Machines Corporation | System and method for displaying the contents of a web page based on a user's interests |
US6016336A (en) | 1997-11-18 | 2000-01-18 | At&T Corp | Interactive voice response system with call trainable routing |
US5943416A (en) | 1998-02-17 | 1999-08-24 | Genesys Telecommunications Laboratories, Inc. | Automated survey control routine in a call center environment |
US6170011B1 (en) | 1998-09-11 | 2001-01-02 | Genesys Telecommunications Laboratories, Inc. | Method and apparatus for determining and initiating interaction directionality within a multimedia communication center |
US6166732A (en) * | 1998-02-24 | 2000-12-26 | Microsoft Corporation | Distributed object oriented multi-user domain with multimedia presentations |
US6185534B1 (en) | 1998-03-23 | 2001-02-06 | Microsoft Corporation | Modeling emotion and personality in a computer user interface |
US6173053B1 (en) | 1998-04-09 | 2001-01-09 | Avaya Technology Corp. | Optimizing call-center performance by using predictive data to distribute calls among agents |
US6134530A (en) | 1998-04-17 | 2000-10-17 | Andersen Consulting Llp | Rule based routing system and method for a virtual sales and service center |
US6099320A (en) | 1998-07-06 | 2000-08-08 | Papadopoulos; Anastasius | Authoring system and method for computer-based training |
EP1099182A4 (en) | 1998-07-31 | 2001-10-10 | Gary J Summers | Management training simulation method and system |
US6128380A (en) | 1998-08-24 | 2000-10-03 | Siemens Information And Communication, Networks, Inc. | Automatic call distribution and training system |
US6448980B1 (en) * | 1998-10-09 | 2002-09-10 | International Business Machines Corporation | Personalizing rich media presentations based on user response to the presentation |
US6067538A (en) | 1998-12-22 | 2000-05-23 | Ac Properties B.V. | System, method and article of manufacture for a simulation enabled focused feedback tutorial system |
US6104790A (en) | 1999-01-29 | 2000-08-15 | International Business Machines Corporation | Graphical voice response system and method therefor |
- 1998-06-03 US US09/089,403 patent/US6405159B2/en not_active Expired - Lifetime
- 2002-04-30 US US10/134,430 patent/US6853966B2/en not_active Expired - Lifetime
- 2004-08-18 US US10/920,201 patent/US20050015744A1/en not_active Abandoned
Patent Citations (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5535321A (en) * | 1991-02-14 | 1996-07-09 | International Business Machines Corporation | Method and apparatus for variable complexity user interface in a data processing system |
US5724987A (en) * | 1991-09-26 | 1998-03-10 | Sam Technology, Inc. | Neurocognitive adaptive computer-aided training method and system |
US5974253A (en) * | 1992-07-22 | 1999-10-26 | Bull S.A. | Using an embedded interpreted language to develop an interactive user-interface description tool |
US5999908A (en) * | 1992-08-06 | 1999-12-07 | Abelow; Daniel H. | Customer-based product design module |
US5790117A (en) * | 1992-11-02 | 1998-08-04 | Borland International, Inc. | System and methods for improved program testing |
US5493608A (en) * | 1994-03-17 | 1996-02-20 | Alpha Logic, Incorporated | Caller adaptive voice response system |
US6223188B1 (en) * | 1996-04-10 | 2001-04-24 | Sun Microsystems, Inc. | Presentation of link information as an aid to hypermedia navigation |
US6058435A (en) * | 1997-02-04 | 2000-05-02 | Siemens Information And Communications Networks, Inc. | Apparatus and methods for responding to multimedia communications based on content analysis |
US6487277B2 (en) * | 1997-09-19 | 2002-11-26 | Siemens Information And Communication Networks, Inc. | Apparatus and method for improving the user interface of integrated voice response systems |
US6295509B1 (en) * | 1997-10-17 | 2001-09-25 | Stanley W. Driskell | Objective, quantitative method for measuring the mental effort of managing a computer-human interface |
US6411687B1 (en) * | 1997-11-11 | 2002-06-25 | Mitel Knowledge Corporation | Call routing based on the caller's mood |
US6456619B1 (en) * | 1997-12-04 | 2002-09-24 | Siemens Information And Communication Networks, Inc. | Method and system for supporting a decision tree with placeholder capability |
US6427142B1 (en) * | 1998-01-06 | 2002-07-30 | Chi Systems, Inc. | Intelligent agent workbench |
US6044146A (en) * | 1998-02-17 | 2000-03-28 | Genesys Telecommunications Laboratories, Inc. | Method and apparatus for call distribution and override with priority |
US6173279B1 (en) * | 1998-04-09 | 2001-01-09 | At&T Corp. | Method of using a natural language interface to retrieve information from one or more data resources |
US6483523B1 (en) * | 1998-05-08 | 2002-11-19 | Institute For Information Industry | Personalized interface browser and its browsing method |
US6405159B2 (en) * | 1998-06-03 | 2002-06-11 | Sbc Technology Resources, Inc. | Method for categorizing, describing and modeling types of system users |
US6418216B1 (en) * | 1998-06-09 | 2002-07-09 | International Business Machines Corporation | Caller-controller barge-in telephone service |
US6539080B1 (en) * | 1998-07-14 | 2003-03-25 | Ameritech Corporation | Method and system for providing quick directions |
US6389400B1 (en) * | 1998-08-20 | 2002-05-14 | Sbc Technology Resources, Inc. | System and methods for intelligent routing of customer requests using customer and agent models |
US6456699B1 (en) * | 1998-11-30 | 2002-09-24 | At&T Corp. | Web-based generation of telephony-based interactive voice response applications |
US20030158655A1 (en) * | 1999-10-19 | 2003-08-21 | American Calcar Inc. | Technique for effective navigation based on user preferences |
US6658389B1 (en) * | 2000-03-24 | 2003-12-02 | Ahmet Alpdemir | System, method, and business model for speech-interactive information system having business self-promotion, audio coupon and rating features |
US6654447B1 (en) * | 2000-10-13 | 2003-11-25 | Cisco Technology, Inc. | Method and system for pausing a session with an interactive voice response unit |
US20040213400A1 (en) * | 2003-01-06 | 2004-10-28 | Golitsin Vladimir G. | Method and apparatus for multimedia interaction routing according to agent capacity sets |
US20040143484A1 (en) * | 2003-01-16 | 2004-07-22 | Viren Kapadia | Systems and methods for distribution of sales leads |
US20050111653A1 (en) * | 2003-04-15 | 2005-05-26 | Robert Joyce | Instant message processing in a customer interaction system |
Cited By (95)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030126583A1 (en) * | 2001-12-28 | 2003-07-03 | Cho Jin Hee | Method and apparatus for identifying software components for use in an object-oriented programming system |
US7162708B2 (en) * | 2001-12-28 | 2007-01-09 | Electronics And Telecommunications Research Institute | Method and apparatus for identifying software components for use in an object-oriented programming system |
US20050069102A1 (en) * | 2003-09-26 | 2005-03-31 | Sbc Knowledge Ventures, L.P. | VoiceXML and rule engine based switchboard for interactive voice response (IVR) services |
US8090086B2 (en) | 2003-09-26 | 2012-01-03 | At&T Intellectual Property I, L.P. | VoiceXML and rule engine based switchboard for interactive voice response (IVR) services |
US20050147218A1 (en) * | 2004-01-05 | 2005-07-07 | Sbc Knowledge Ventures, L.P. | System and method for providing access to an interactive service offering |
US20080027730A1 (en) * | 2004-01-05 | 2008-01-31 | Sbc Knowledge Ventures, L.P. | System and method for providing access to an interactive service offering |
US20060018443A1 (en) * | 2004-07-23 | 2006-01-26 | Sbc Knowledge Ventures, Lp | Announcement system and method of use |
US7936861B2 (en) | 2004-07-23 | 2011-05-03 | At&T Intellectual Property I, L.P. | Announcement system and method of use |
US20060023863A1 (en) * | 2004-07-28 | 2006-02-02 | Sbc Knowledge Ventures, L.P. | Method and system for mapping caller information to call center agent transactions |
US20060026049A1 (en) * | 2004-07-28 | 2006-02-02 | Sbc Knowledge Ventures, L.P. | Method for identifying and prioritizing customer care automation |
US8165281B2 (en) | 2004-07-28 | 2012-04-24 | At&T Intellectual Property I, L.P. | Method and system for mapping caller information to call center agent transactions |
US20090287484A1 (en) * | 2004-08-12 | 2009-11-19 | At&T Intellectual Property I, L.P. | System and Method for Targeted Tuning of a Speech Recognition System |
US9368111B2 (en) | 2004-08-12 | 2016-06-14 | Interactions Llc | System and method for targeted tuning of a speech recognition system |
US8751232B2 (en) | 2004-08-12 | 2014-06-10 | At&T Intellectual Property I, L.P. | System and method for targeted tuning of a speech recognition system |
US8401851B2 (en) | 2004-08-12 | 2013-03-19 | At&T Intellectual Property I, L.P. | System and method for targeted tuning of a speech recognition system |
US20060036437A1 (en) * | 2004-08-12 | 2006-02-16 | Sbc Knowledge Ventures, Lp | System and method for targeted tuning module of a speech recognition system |
US20060039547A1 (en) * | 2004-08-18 | 2006-02-23 | Sbc Knowledge Ventures, L.P. | System and method for providing computer assisted user support |
US20060062375A1 (en) * | 2004-09-23 | 2006-03-23 | Sbc Knowledge Ventures, L.P. | System and method for providing product offers at a call center |
US8660256B2 (en) | 2004-10-05 | 2014-02-25 | At&T Intellectual Property, L.P. | Dynamic load balancing between multiple locations with different telephony system |
US20070165830A1 (en) * | 2004-10-05 | 2007-07-19 | Sbc Knowledge Ventures, Lp | Dynamic load balancing between multiple locations with different telephony system |
US8102992B2 (en) | 2004-10-05 | 2012-01-24 | At&T Intellectual Property, L.P. | Dynamic load balancing between multiple locations with different telephony system |
US20060072737A1 (en) * | 2004-10-05 | 2006-04-06 | Jonathan Paden | Dynamic load balancing between multiple locations with different telephony system |
US9047377B2 (en) | 2004-10-27 | 2015-06-02 | At&T Intellectual Property I, L.P. | Method and system to combine keyword and natural language search results |
US8667005B2 (en) | 2004-10-27 | 2014-03-04 | At&T Intellectual Property I, L.P. | Method and system to combine keyword and natural language search results |
US8321446B2 (en) | 2004-10-27 | 2012-11-27 | At&T Intellectual Property I, L.P. | Method and system to combine keyword results and natural language search results |
US7668889B2 (en) | 2004-10-27 | 2010-02-23 | At&T Intellectual Property I, Lp | Method and system to combine keyword and natural language search results |
US20060093097A1 (en) * | 2004-11-02 | 2006-05-04 | Sbc Knowledge Ventures, L.P. | System and method for identifying telephone callers |
US7657005B2 (en) | 2004-11-02 | 2010-02-02 | At&T Intellectual Property I, L.P. | System and method for identifying telephone callers |
US20060115070A1 (en) * | 2004-11-29 | 2006-06-01 | Sbc Knowledge Ventures, L.P. | System and method for utilizing confidence levels in automated call routing |
US7724889B2 (en) | 2004-11-29 | 2010-05-25 | At&T Intellectual Property I, L.P. | System and method for utilizing confidence levels in automated call routing |
US9112972B2 (en) | 2004-12-06 | 2015-08-18 | Interactions Llc | System and method for processing speech |
US20100185443A1 (en) * | 2004-12-06 | 2010-07-22 | At&T Intellectual Property I, L.P. | System and Method for Processing Speech |
US20060133587A1 (en) * | 2004-12-06 | 2006-06-22 | Sbc Knowledge Ventures, Lp | System and method for speech recognition-enabled automatic call routing |
US7864942B2 (en) | 2004-12-06 | 2011-01-04 | At&T Intellectual Property I, L.P. | System and method for routing calls |
US20080008308A1 (en) * | 2004-12-06 | 2008-01-10 | Sbc Knowledge Ventures, Lp | System and method for routing calls |
US7720203B2 (en) | 2004-12-06 | 2010-05-18 | At&T Intellectual Property I, L.P. | System and method for processing speech |
US8306192B2 (en) | 2004-12-06 | 2012-11-06 | At&T Intellectual Property I, L.P. | System and method for processing speech |
US9350862B2 (en) | 2004-12-06 | 2016-05-24 | Interactions Llc | System and method for processing speech |
US20060126808A1 (en) * | 2004-12-13 | 2006-06-15 | Sbc Knowledge Ventures, L.P. | System and method for measurement of call deflection |
US20060126811A1 (en) * | 2004-12-13 | 2006-06-15 | Sbc Knowledge Ventures, L.P. | System and method for routing calls |
US20060153345A1 (en) * | 2005-01-10 | 2006-07-13 | Sbc Knowledge Ventures, Lp | System and method for speech-enabled call routing |
US8503662B2 (en) | 2005-01-10 | 2013-08-06 | At&T Intellectual Property I, L.P. | System and method for speech-enabled call routing |
US7751551B2 (en) | 2005-01-10 | 2010-07-06 | At&T Intellectual Property I, L.P. | System and method for speech-enabled call routing |
US9088652B2 (en) | 2005-01-10 | 2015-07-21 | At&T Intellectual Property I, L.P. | System and method for speech-enabled call routing |
US8824659B2 (en) | 2005-01-10 | 2014-09-02 | At&T Intellectual Property I, L.P. | System and method for speech-enabled call routing |
US7966176B2 (en) | 2005-01-14 | 2011-06-21 | At&T Intellectual Property I, L.P. | System and method for independently recognizing and selecting actions and objects in a speech recognition system |
US20060161431A1 (en) * | 2005-01-14 | 2006-07-20 | Bushey Robert R | System and method for independently recognizing and selecting actions and objects in a speech recognition system |
US20060159240A1 (en) * | 2005-01-14 | 2006-07-20 | Sbc Knowledge Ventures, Lp | System and method of utilizing a hybrid semantic model for speech recognition |
US20090067590A1 (en) * | 2005-01-14 | 2009-03-12 | Sbc Knowledge Ventures, L.P. | System and method of utilizing a hybrid semantic model for speech recognition |
US20100040207A1 (en) * | 2005-01-14 | 2010-02-18 | At&T Intellectual Property I, L.P. | System and Method for Independently Recognizing and Selecting Actions and Objects in a Speech Recognition System |
US20060177040A1 (en) * | 2005-02-04 | 2006-08-10 | Sbc Knowledge Ventures, L.P. | Call center system for multiple transaction selections |
US8068596B2 (en) | 2005-02-04 | 2011-11-29 | At&T Intellectual Property I, L.P. | Call center system for multiple transaction selections |
US20060188087A1 (en) * | 2005-02-18 | 2006-08-24 | Sbc Knowledge Ventures, Lp | System and method for caller-controlled music on-hold |
US20060198505A1 (en) * | 2005-03-03 | 2006-09-07 | Sbc Knowledge Ventures, L.P. | System and method for on hold caller-controlled activities and entertainment |
US8130936B2 (en) | 2005-03-03 | 2012-03-06 | At&T Intellectual Property I, L.P. | System and method for on hold caller-controlled activities and entertainment |
US20060215833A1 (en) * | 2005-03-22 | 2006-09-28 | Sbc Knowledge Ventures, L.P. | System and method for automating customer relations in a communications environment |
US8488770B2 (en) | 2005-03-22 | 2013-07-16 | At&T Intellectual Property I, L.P. | System and method for automating customer relations in a communications environment |
US7933399B2 (en) | 2005-03-22 | 2011-04-26 | At&T Intellectual Property I, L.P. | System and method for utilizing virtual agents in an interactive voice response application |
US8223954B2 (en) | 2005-03-22 | 2012-07-17 | At&T Intellectual Property I, L.P. | System and method for automating customer relations in a communications environment |
US20060215831A1 (en) * | 2005-03-22 | 2006-09-28 | Sbc Knowledge Ventures, L.P. | System and method for utilizing virtual agents in an interactive voice response application |
US20060256932A1 (en) * | 2005-05-13 | 2006-11-16 | Sbc Knowledge Ventures, Lp | System and method of determining call treatment of repeat calls |
US20100054449A1 (en) * | 2005-05-13 | 2010-03-04 | At&T Intellectual Property I, L.P. | System and Method of Determining Call Treatment of Repeat Calls |
US8295469B2 (en) | 2005-05-13 | 2012-10-23 | At&T Intellectual Property I, L.P. | System and method of determining call treatment of repeat calls |
US8879714B2 (en) | 2005-05-13 | 2014-11-04 | At&T Intellectual Property I, L.P. | System and method of determining call treatment of repeat calls |
US8005204B2 (en) | 2005-06-03 | 2011-08-23 | At&T Intellectual Property I, L.P. | Call routing system and method of using the same |
US20070019800A1 (en) * | 2005-06-03 | 2007-01-25 | Sbc Knowledge Ventures, Lp | Call routing system and method of using the same |
US8280030B2 (en) | 2005-06-03 | 2012-10-02 | At&T Intellectual Property I, Lp | Call routing system and method of using the same |
US8619966B2 (en) | 2005-06-03 | 2013-12-31 | At&T Intellectual Property I, L.P. | Call routing system and method of using the same |
US20100091978A1 (en) * | 2005-06-03 | 2010-04-15 | At&T Intellectual Property I, L.P. | Call routing system and method of using the same |
US20070025542A1 (en) * | 2005-07-01 | 2007-02-01 | Sbc Knowledge Ventures, L.P. | System and method of automated order status retrieval |
US8503641B2 (en) | 2005-07-01 | 2013-08-06 | At&T Intellectual Property I, L.P. | System and method of automated order status retrieval |
US9729719B2 (en) | 2005-07-01 | 2017-08-08 | At&T Intellectual Property I, L.P. | System and method of automated order status retrieval |
US8731165B2 (en) | 2005-07-01 | 2014-05-20 | At&T Intellectual Property I, L.P. | System and method of automated order status retrieval |
US9088657B2 (en) | 2005-07-01 | 2015-07-21 | At&T Intellectual Property I, L.P. | System and method of automated order status retrieval |
US20070025528A1 (en) * | 2005-07-07 | 2007-02-01 | Sbc Knowledge Ventures, L.P. | System and method for automated performance monitoring for a call servicing system |
US8175253B2 (en) | 2005-07-07 | 2012-05-08 | At&T Intellectual Property I, L.P. | System and method for automated performance monitoring for a call servicing system |
US8526577B2 (en) | 2005-08-25 | 2013-09-03 | At&T Intellectual Property I, L.P. | System and method to access content from a speech-enabled automated system |
US20070047718A1 (en) * | 2005-08-25 | 2007-03-01 | Sbc Knowledge Ventures, L.P. | System and method to access content from a speech-enabled automated system |
US8548157B2 (en) | 2005-08-29 | 2013-10-01 | At&T Intellectual Property I, L.P. | System and method of managing incoming telephone calls at a call center |
US20100134619A1 (en) * | 2008-12-01 | 2010-06-03 | International Business Machines Corporation | Evaluating an effectiveness of a monitoring system |
US9111237B2 (en) * | 2008-12-01 | 2015-08-18 | International Business Machines Corporation | Evaluating an effectiveness of a monitoring system |
US10230762B2 (en) | 2012-08-31 | 2019-03-12 | Jpmorgan Chase Bank, N.A. | System and method for sharing information in a private ecosystem |
US10630722B2 (en) | 2012-08-31 | 2020-04-21 | Jpmorgan Chase Bank, N.A. | System and method for sharing information in a private ecosystem |
US9430211B2 (en) | 2012-08-31 | 2016-08-30 | Jpmorgan Chase Bank, N.A. | System and method for sharing information in a private ecosystem |
US20150213392A1 (en) * | 2012-09-27 | 2015-07-30 | Carnegie Mellon University | System and Method of Using Task Fingerprinting to Predict Task Performance |
WO2014052736A1 (en) * | 2012-09-27 | 2014-04-03 | Carnegie Mellon University | System and method of using task fingerprinting to predict task performance |
US20150254594A1 (en) * | 2012-09-27 | 2015-09-10 | Carnegie Mellon University | System for Interactively Visualizing and Evaluating User Behavior and Output |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US20170372225A1 (en) * | 2016-06-28 | 2017-12-28 | Microsoft Technology Licensing, Llc | Targeting content to underperforming users in clusters |
WO2019074473A1 (en) * | 2017-10-09 | 2019-04-18 | Hewlett-Packard Development Company, L.P. | User capability score |
CN111201541A (en) * | 2017-10-09 | 2020-05-26 | 惠普发展公司,有限责任合伙企业 | User competency scoring |
US11157832B2 (en) * | 2017-12-19 | 2021-10-26 | International Business Machines Corporation | Machine learning system for predicting optimal interruptions based on biometric data collected using wearable devices |
US10877866B2 (en) * | 2019-05-09 | 2020-12-29 | International Business Machines Corporation | Diagnosing workload performance problems in computer servers |
US11165679B2 (en) | 2019-05-09 | 2021-11-02 | International Business Machines Corporation | Establishing consumed resource to consumer relationships in computer servers using micro-trend technology |
Also Published As
Publication number | Publication date |
---|---|
US6405159B2 (en) | 2002-06-11 |
US6853966B2 (en) | 2005-02-08 |
US20020133394A1 (en) | 2002-09-19 |
US20010011211A1 (en) | 2001-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6853966B2 (en) | Method for categorizing, describing and modeling types of system users | |
US11836338B2 (en) | System and method for building and managing user experience for computer software interfaces | |
Pijpers et al. | Senior executives' use of information technology | |
Karat | Software evaluation methodologies | |
US7908166B2 (en) | System and method to quantify consumer preferences using attributes | |
US7340409B1 (en) | Computer based process for strategy evaluation and optimization based on customer desired outcomes and predictive metrics | |
Kim et al. | A methodology for developing a usability index of consumer electronic products | |
US20020196277A1 (en) | Method and system for automating the creation of customer-centric interfaces | |
US7827203B2 (en) | System to determine respondent-specific product attribute levels | |
EP1176527A1 (en) | Product design process and product design apparatus | |
Zülch et al. | Usability evaluation of user interfaces with the computer-aided evaluation tool PROKUS | |
Mello | Right process, right product | |
JP4479343B2 (en) | Programs and devices for system usability study support | |
Lee et al. | Customer needs and technology analysis in new product development via fuzzy QFD and Delphi | |
Arrasid et al. | Improvement I-gracias Mobile Website Using User-Centered Design (ucd) Methods | |
Preece | Supporting user testing in human-computer interaction design | |
US20220108257A1 (en) | System for creating ideal experience metrics and evaluation platform | |
Stolk | From Fragmentation to Uniformity: Towards a Standardized Decision Process for UX Evaluation Methods in a Large End-To-End Agency | |
Dzida et al. | ERGOguide | |
Vergeer | Measuring ease of use: analysis, implementation and output of usability evaluation tools and procedures | |
Hartrum et al. | Evaluating user satisfaction of an interactive computer program | |
WO2023043742A1 (en) | Systems and methods for the generation and analysis of a user experience score | |
Seebode et al. | Assessing the Quality and Usability of Multimodal Systems | |
Harris et al. | OWLKNEST: An expert system to provide operator workload guidance | |
Lee | Assessing interactive system effectiveness with usability design heuristics and Markov models of user behavior |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |