ETSI ETR 261-5
October 1996

Source: ETSI TC-HF
Reference: DTR/HF-01028-5
ICS: 33.020
Key words: Keypad, MMI, supplementary service

Human Factors (HF);
Assessment and definition of a harmonized minimum man-machine interface (MMI) for accessing and controlling public network based supplementary services;
Part 5: Experimental evaluation of the CEPT and GSM code schemes

ETSI
European Telecommunications Standards Institute

ETSI Secretariat
Postal address: F-06921 Sophia Antipolis CEDEX - FRANCE
Office address: 650 Route des Lucioles - Sophia Antipolis - Valbonne - FRANCE
X.400: c=fr, a=atlas, p=etsi, s=secretariat - Internet: secretariat@etsi.fr
Tel.: +33 4 92 94 42 00 - Fax: +33 4 93 65 47 16

Copyright Notification: No part may be reproduced except as authorized by written permission. The copyright and the foregoing restriction extend to reproduction in all media.

© European Telecommunications Standards Institute 1996. All rights reserved.

Key to the subject characteristics (subclause 4.2):
Ravens grades: I = > 95 percentile; II+ = 90 - 95 %; II = 75 - 90 %; III+ = 50 - 75 %; III- = 25 - 50 %; IV = 10 - 25 %; IV- = 5 - 10 %; V = < 5 percentile.
Education: O = O-Levels/GCSEs; A = A-Level; D = Degree.

The key differences that might affect the data are:
- Group 2 is younger than the other three groups (there is one 25 year old; without him the mean age is 39,9 and the range is 35 - 45);
- Groups 2 and 4 include the majority of degree level subjects;
- Group 4 has the narrowest range of Ravens scores (II to III-).

4.3 Materials

A training booklet was prepared for each code scheme (see annex A for the CEPT example). The booklet introduced three supplementary services: Call Barring, Call Waiting and Call Forwarding, and used these to present the basic range of commands for the subjects to learn. These were:

                          CEPT            GSM
Register and Activate     *SC*SI#         *SC*SI#          same
Register Only             *SC*SI*0#       *SC*SI#, #SC#    different
Activate                  *SC#            *SC#             same
Interrogate               *#SC#           *#SC#            same
Deactivate                #SC#            #SC#             same
Erasure                   #SC*1#          #SC#             different
Switching Order           R SO            SO               different

Where:
- SC is Service Code, a two or three digit code (e.g. 21, 43, etc.);
- SI is Supplementary Information (e.g. a telephone number);
- SO is Switching Order Number (e.g. 1, 2, 3, etc.);
- R is Register Recall.
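To make the syntax concrete, the sketch below composes command strings from these templates. It is illustrative only and not part of the original study materials: the dictionaries simply restate the table above (the CEPT register-only and erasure suffixes follow the reading given there), and the function name compose is hypothetical.

# Illustrative sketch: composing *# command strings from the templates above.
# SC is the Service Code (e.g. 21 for Call Forwarding), SI is the
# Supplementary Information (e.g. a telephone number to forward to).

CEPT_TEMPLATES = {
    "register_and_activate": "*{sc}*{si}#",
    "register_only": "*{sc}*{si}*0#",   # CEPT-only suffix as read from the table
    "activate": "*{sc}#",
    "interrogate": "*#{sc}#",
    "deactivate": "#{sc}#",
    "erasure": "#{sc}*1#",              # CEPT-only suffix as read from the table
}

GSM_TEMPLATES = {
    "register_and_activate": "*{sc}*{si}#",
    "activate": "*{sc}#",
    "interrogate": "*#{sc}#",
    "deactivate": "#{sc}#",
    "erasure": "#{sc}#",
}

def compose(templates, operation, sc, si=""):
    """Build one key sequence, e.g. compose(CEPT_TEMPLATES,
    "register_and_activate", "21", "566788") returns "*21*566788#"."""
    return templates[operation].format(sc=sc, si=si)

For example, compose(GSM_TEMPLATES, "deactivate", "43") gives #43#, while under this reading the CEPT and GSM erasure sequences for the same service code differ (#43*1# vs. #43#).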

The two booklets, one for CEPT and one for GSM, were identical in style and content for all aspects except the necessary command sequences. The booklets presented each supplementary service in turn and the commands required within a service in a set pattern: Description of the service, What to do, Why to do it. The sequence of the commands varied slightly from service to service but remained logical, i.e. Switch Off always came after Switch On. After the description, what to do and why, the booklets included a brief exercise to reinforce the command just learnt. Subjects were asked to write out the relevant service command for one or two specific scenarios, and then to go through a complete keying sequence on the telephone provided. The telephones used, fixed and mobile, remained unconnected to the network and consequently offered no additional form of feedback as to the correctness or otherwise of the keyed command. After presenting the commands used for all three services, the booklet presented a general summary of the command syntax rules.

A common opinion questionnaire was prepared, asking subjects to rate their level of agreement/disagreement on a five point scale for ten statements (annex B). The statements addressed five topics: the booklet, the services in general, the commands in general, and the use of service codes (21, 33, 43, etc.) and symbols (* and #) in particular. A negative and a positive statement were presented for each of the topics. The statement sequence was structured to present simple questions about the booklet and services first, before presenting the more complex statements about the service commands, etc. The negative/positive statements were separated and distributed evenly throughout the list. The instructions and question format were copied from a previously validated satisfaction inventory.

A common test paper or quiz was prepared for both code schemes (annex C). The quiz had 18 questions covering the range of tasks possible for each service: 7 on Call Forwarding, 4 on Call Barring and 7 on Call Waiting. The questions were presented in their groups and asked subjects to record the *# and digit command sequences required for each task. A list of the required Service Codes was presented on the first page of the quiz, together with the PIN required for Call Barring. Any necessary telephone numbers were also included in the question. Thus the only elements being tested were the subjects' memories of the *# command sequences and the command syntax. The CEPT and GSM quizzes were identical except for the context information (Company Logo) at the top of the first page, and the colour of the paper.

4.4 Context

In order to help the subjects remember and separate the code schemes they were learning, each scheme was always presented within a "Context". For example, the CEPT scheme was introduced as coming from "Star Telephones Ltd.". The experimenter and assistant had name badges for this fictitious company, and on the wall of the experimental room there were posters declaring "Star Telephones Ltd" and showing the "Company Logo". The name badges, poster printing, the booklet, the quiz and the telephones (BT Viscount) were all red in colour. For the GSM scheme the fictitious company was "Mobile Communications plc", the colour was blue and the mobile telephones were black. The opinion questionnaire, being common to both code schemes, was yellow.

4.5 The Pilot Study

Before conducting the main experiment, a pilot study was conducted to verify the reliability of the learning and test materials and of the experimental design. Six subjects took part, drawn from the same population as for the main study. Two subjects were assigned to each of conditions 1 and 3, and one subject each to conditions 2 and 4. The pilot experiment was conducted at Ferris Associates' offices over one week, e.g. day 1 was session 1, day 3 was session 2, and day 5 was session 3. Subjects completed the training booklets, opinion questionnaire and test papers necessary for each condition. They did not complete the Ravens exercise.

The results from the pilot confirmed several points:
- The materials had some errors; these were corrected before the full test set was reproduced.
- The methodology appeared to be acceptable to the subjects, and was successful in collecting the data required.
- An initial concern that the task might be too simple to obtain meaningful data, i.e. that the data collected would present a ceiling effect, was resolved: there was significant variance between subjects, from 72 % to 5 % errors, irrespective of condition, and significant variance between questions, from 81 % to 0 % errors, irrespective of condition.

The individual data from the pilot experiment was not included in the main study.

4.6 The Main Study

The main experiment was conducted in one of the classrooms of a local school. For each experimental session the tables were evenly spaced about the classroom and the context material was mounted on the walls. Each subject sat at a separate table. The relevant telephone and training booklet were placed on each table together with a pen or pencil.

At the start of the first session for each group of subjects, they were greeted at the entrance to the building, directed to the classroom and asked to sit at any desk. Before starting, the experimenter introduced herself and her colleague and gave a brief introduction on the purposes of the study and the order of events for the experiment (annex D).

At the start of the second session the room and tables were laid out as before. Groups 1 and 3 were reminded of the purposes of the study (to test the material and the commands, and not the subject), of the order of events (training booklet, questionnaire, quiz) and of any special controls/keys necessary (the "Recall" key on the telephone and the "Send" and "End" keys on the mobile were not obvious). Groups 2 and 4 were simply given the quiz for their particular condition/code scheme. The telephone/mobile was also placed on the desk for reference/context.

At the start of the third session the room and desks were laid out as above. Groups 1 and 3 were given their first quiz (for the code scheme they learnt in session 1) and, after completion, given their second quiz (for the code scheme they learnt in session 2). The telephone/mobile on the desk was also changed. The posters on the wall were not. Groups 2 and 4 were again given their respective quiz. Following completion of the quizzes, each group of subjects was asked to complete the Ravens Standard Progressive Matrices. This exercise was introduced using the standard set of instructions provided.

Finally, at the end of the third session there was a short debriefing, covering the following points:
- Statement of thanks for the time and effort subjects had made.
- Clarification of the reason for the Ravens exercise (to enable the subject groups to be compared and to enable comparison with subjects used in the other studies).
- Invitation to question or comment on the study, and to have their individual results.
- Payment of the reward (two bottles of wine per subject).

Following the requests made during this debriefing, a short report was made back to all subjects providing feedback on the general results of the study; in addition, a few individual subjects were given their individual Ravens score/grade.

5 Results

5.1 Introduction

Three sets of data were obtained during the experiment:
1) error scores from each of the quizzes completed in each experimental session;
2) ratings from each of the opinion questionnaires for each subject on each code scheme learnt;
3) Ravens SPM scores for each subject.

The error scores used were raw error scores, obtained simply by marking the subjects' answers in the relevant quiz as right or wrong. The error score for any particular quiz was the sum total of wrong answers. To have the answer recorded as correct, the subject had to record the complete command sequence correctly. There were three questions, 1, 6 and 16, to which the subject could get a correct result with a different answer from the target answer. Within the analysis these pseudo-errors were counted as correct. These were:

              Target answer          Pseudo-error
Question 1    *21#                   *21*566788#
Question 6    *21*0104969566788#     *#21#, to check, then if necessary *21*0104969566788#
Question 16   0 or R0                "Ignore" or "Do nothing"

NOTE: The analysis of error types used a slightly different approach, see subclause 5.5.
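The scoring rule just described can be summarized in a short sketch. This is illustrative only, not the report's analysis script: the data structures, the single pseudo-error entry and the function name quiz_error_score are invented; the rule it encodes is that an answer counts as correct only if the whole recorded sequence matches the target or one of the accepted pseudo-errors.

# Illustrative sketch: raw error scoring with pseudo-errors counted as correct.

# Accepted alternatives for particular questions (only question 1 shown here).
PSEUDO_ERRORS = {
    1: {"*21*566788#"},
}

def quiz_error_score(answers, targets):
    """Return the raw error score for one quiz: the number of questions whose
    recorded command sequence matches neither the target answer nor one of the
    accepted pseudo-errors."""
    errors = 0
    for question, target in targets.items():
        given = answers.get(question, "").replace(" ", "")
        accepted = {target} | PSEUDO_ERRORS.get(question, set())
        if given not in {a.replace(" ", "") for a in accepted}:
            errors += 1
    return errors

targets = {1: "*21#", 2: "#43#"}           # hypothetical target answers
answers = {1: "*21*566788#", 2: "*43#"}    # hypothetical subject answers
print(quiz_error_score(answers, targets))  # prints 1: question 2 is wrong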

This data has been used to address five questions:
1) Were the subject groups homogeneous?
2) What performance and preference differences were there, if any, between the CEPT and GSM schemes?
3) Was there any effect, positive or negative, from learning both schemes?
4) What type of errors were subjects making?
5) What level of usability/learnability was achievable with the current code schemes?

Each question is considered separately.

5.2 Were the subject groups homogeneous?

The basic characteristics for the four groups of subjects were quoted above (see subclause 4.2: Subjects). The main data collected for comparing the groups was the Ravens SPM. Scores for each subject were converted to a Ravens Grade depending on the subject's age, from tables compiled for the UK population. The range of grades is I, II+, II, III+, III-, IV, IV- and V, and these relate to the > 95, 90 - 95, 75 - 90, 50 - 75, 25 - 50, 10 - 25, 5 - 10 and < 5 percentiles of the population respectively. The mean grade for each group of subjects was III+. If each subject's grade was converted to a ranking, 1 to 8 respectively, any differences between the groups could be tested by the Kruskal-Wallis non-parametric one-way analysis of variance. A one-tailed probability of 0,617 confirmed that the groups did not differ significantly in respect of their Ravens scores. Therefore, for the purposes of the experiment, they could be considered homogeneous.
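The test described above can be reproduced along the following lines. This is a sketch only: the grade lists are invented for illustration, and it assumes the SciPy implementation of the Kruskal-Wallis test (scipy.stats.kruskal); the report does not say which tool was used.

# Illustrative sketch: convert Ravens grades to rankings 1 to 8 and compare the
# four groups with a Kruskal-Wallis one-way analysis of variance.
from scipy.stats import kruskal

GRADE_RANK = {"I": 1, "II+": 2, "II": 3, "III+": 4,
              "III-": 5, "IV": 6, "IV-": 7, "V": 8}

# Hypothetical grades for the four subject groups (not the study's data).
groups = [
    ["III+", "II", "III-", "III+", "IV"],
    ["II+", "III+", "II", "III+", "III-"],
    ["III-", "III+", "IV", "II", "III+"],
    ["II", "III+", "III-", "III+", "II+"],
]

rankings = [[GRADE_RANK[grade] for grade in group] for group in groups]
h_statistic, p_value = kruskal(*rankings)
print(f"H = {h_statistic:.3f}, p = {p_value:.3f}")
# A large p-value (the report quotes a probability of 0,617) indicates that the
# groups do not differ significantly in their Ravens scores.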

5.3 What performance and preference differences were there, if any, between the CEPT and GSM schemes?

The comparison of subject performance and preference between the two code schemes was made possible by the degree of homogeneity within the experimental groups. The comparison could be viewed at many levels, including:
- comparison of errors between Groups 2 and 4, and between Groups 1 and 3;
- comparison of errors between all four groups;
- comparison of Opinion Ratings between the four groups.

5.3.1 Comparison of errors between Group 2 (CEPT) and Group 4 (GSM)

The raw error scores for Group 2 can be directly compared with those for Group 4 by using the mean and standard deviation for each experimental session. Figure 1 shows this data for the two groups, comparing CEPT vs. GSM over the three consecutive weeks of the experiment. It demonstrates the broad spread of errors recorded in each group, and the similarity between the means. The main differences are that the GSM figures for week 1 are much lower than for any other session, and that the GSM sd (approx. ±4) is generally narrower than the CEPT sd (approx. ±6).

Figure 1: Error scores (mean and standard deviation) for G2 CEPT weeks 1, 2 and 3 and G4 GSM weeks 1, 2 and 3
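As a sketch of this descriptive comparison (not the report's analysis), the per-session means and standard deviations could be computed as follows; the session labels and scores are invented.

# Illustrative sketch: mean and standard deviation of raw error scores per session.
from statistics import mean, stdev

# Hypothetical raw error scores (one value per subject) for each session.
sessions = {
    "Group 2 (CEPT), week 1": [9, 7, 12, 5, 10, 8],
    "Group 4 (GSM), week 1": [3, 2, 4, 3, 5, 4],
    "Group 4 (GSM), week 2": [7, 6, 9, 8, 5, 7],
}

for label, scores in sessions.items():
    print(f"{label}: mean = {mean(scores):.1f}, sd = {stdev(scores):.1f}")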

Group 2 data is more spread than most of the other groups. Statistically, a parametric analysis of variance is not possible across all four groups. However, it is possible to construct a table of F-tests comparing each experimental session with each other session. There is, however, a danger that this analysis increases the possibility of type 1 errors (i.e. rejecting the null hypothesis when it is true). This can be balanced to some degree by increasing the level of significance required; this carries a risk of type 2 errors (i.e. accepting the null hypothesis when it is false). Table 4 shows the results of the series of tests.

Table 4: Cross-comparison of each experimental session, F-tests

Key: - = non-significant; ✓ = significant at the 0,05 - 0,01 level; ✓✓ = significant at the 0,01 level.
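A sketch of how such a cross-comparison table could be built is shown below. It is not the report's analysis: it assumes pairwise one-way F-tests on the raw error scores (scipy.stats.f_oneway) and an illustrative raised threshold of 0,01 to offset the inflated type 1 error risk; the session labels and scores are invented.

# Illustrative sketch: pairwise F-tests between experimental sessions, with a
# stricter significance level to compensate for the many comparisons.
from itertools import combinations
from scipy.stats import f_oneway

# Hypothetical raw error scores per session (group/week cells).
sessions = {
    "G1 wk1 CEPT": [9, 7, 12, 5, 10],
    "G2 wk1 CEPT": [8, 6, 11, 7, 9],
    "G4 wk1 GSM": [3, 2, 4, 3, 5],
    "G4 wk2 GSM": [7, 6, 9, 8, 5],
}

ALPHA = 0.01  # raised threshold: fewer type 1 errors, at some risk of type 2 errors

for (name_a, scores_a), (name_b, scores_b) in combinations(sessions.items(), 2):
    f_statistic, p = f_oneway(scores_a, scores_b)
    mark = "significant" if p < ALPHA else "-"
    print(f"{name_a} vs {name_b}: F = {f_statistic:.2f}, p = {p:.3f} ({mark})")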

Clearly there is a consistent pattern: the data from Group 4 week 1 is consistently significantly different to all other groups except the Group 1 week 3 CEPT scores. This is unlikely to have happened by chance, even with the increased possibility of type 1 errors. However, what the table also shows is that there is no consistent pattern of differences between CEPT and GSM, or between the sessions in weeks 1, 2 and 3.

5.3.4 Comparison of Opinion Ratings

If there is no evidence of performance differences between the two code schemes, the question arises: is there a difference in
