On 29 August 2012, I coined the term "Meta-State Engineering" and explained the use of Neuro-Linguistic Programming (NLP) and Neuro-Semantics (NS) in engineering design to fellow participants at a Neuro-Semantics course in Shah Alam; I also argued in my blog that it is needed to complement Artificial Intelligence studies such as "Expert Systems" and "Fuzzy Logic". This morning I read this article.
=================
NASA Launches with NLP! An Interview with Dr. Alenka Brown Vanhoozer, Director for Advanced Cognitive Technologies
By Al Wadleigh
I met Alenka at the NLP Comprehensive
Post Master Advanced Language Patterns workshop in Winter Park,
Colorado, this past summer. Alenka is taking NLP to the stars –
literally. Her work involves using NLP to create what she calls
Adaptive Human-System Integration Technologies for various applications: aerospace (commercial, military, NASA), brain mapping,
behavior profiling, modelling of learning and decision-making
strategies and more. Here is what she had to say about her work.
AW: So, tell me about your organization and what you are doing.
AB: I'm the Director of the Center for
Advanced Cognitive Technologies at the Department of Energy's Oak Ridge
Y-12 facility in Tennessee. Prior to this position, I was a research
engineer for Argonne National Laboratory where my work with NLP started.
For the past nine years I have conducted research involving
representational systems and submodalities in the design of adaptive
systems for human-system interaction.
The research involved
understanding how internal representational systems can be used to
develop adaptive systems that match how an individual processes
information. Other areas of study have involved modelling beliefs,
values, and motivation at the neuro-physiological level and preferred
representational systems, chiefly visual, auditory and kinesthetic
(V-A-K). In my research to date, I found the modelling approach of NLP to
be an untapped resource for understanding human behavior in the fields
of artificial intelligence, cyber security, adaptive systems,
identifying inherent system design errors, extraction of information for
systems design, virtual mirroring, modelling successful individuals,
and much, much more. And now, I'm integrating this with neural brain
research for correlating representational systems with neural pathways
and associating the results with semiotics.
AW: What are some of the projects you are working on?
AB: One of the projects I'm currently
conducting is for NASA based on the representational systems – V-A-K.
I'm looking at previously recorded videos of the eye-scanning movements of different subject pilots and correlating them with written transcripts of the pilots' verbal exchanges during various scenarios. I'm mapping how each pilot processes information during normal and abnormal conditions: basically, determining primary representational systems and how internal strategies are constructed when performing specific tasks. From the results, I will determine how
to enhance current training programs for pilots and suggest new designs
for flight decks. Over the length of the project, results from this study
may be used to enhance certain areas of the astronaut training program.
The project also involves training human factors engineers at NASA
Langley on how to use NLP techniques in their research studies.
I'm conducting work with a colleague for
the Department of Defense in designing adaptive systems for battle
visualization and planning. Other projects in the near future will
involve NLP methodology and knowledge preservation, advanced learning
systems, behavior profiling and adaptive cognitive systems for
biometrics, which involves correlating representational systems with
neural pathways and semiotics.
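To make the mapping step concrete, here is a minimal sketch assuming the classic NLP eye-accessing-cue chart (upward gaze roughly visual, lateral roughly auditory, downward-right roughly kinesthetic). The function names, gaze codes, and sample data are hypothetical; this illustrates the general idea, not Dr. Brown Vanhoozer's actual protocol.

```python
# Illustrative sketch only: estimate a subject's primary representational
# system by tallying coded eye movements against the classic NLP
# eye-accessing-cue chart. All names and codes are hypothetical.
from collections import Counter

# Classic chart for a typically organized right-handed subject:
# upward gaze ~ visual, lateral ~ auditory, down-left ~ internal
# dialogue (auditory), down-right ~ kinesthetic.
GAZE_TO_SYSTEM = {
    "up_left": "visual",
    "up_right": "visual",
    "lateral_left": "auditory",
    "lateral_right": "auditory",
    "down_left": "auditory",
    "down_right": "kinesthetic",
}

def primary_system(gaze_codes):
    """Return the most frequent representational system in a session."""
    tally = Counter(GAZE_TO_SYSTEM[c] for c in gaze_codes if c in GAZE_TO_SYSTEM)
    return tally.most_common(1)[0][0] if tally else "undetermined"

# Example: coded observations from one scenario.
session = ["up_left", "down_right", "down_right", "lateral_left", "down_right"]
print(primary_system(session))  # -> kinesthetic
```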
AW: What would be an example of how you use this information?
AB: New commercial pilots being hired
today are no longer primarily ex-military pilots in their late 30s or early 40s with 20 years' experience. Because fewer ex-military pilots are retiring, commercial airlines are starting to hire pilots who come from a general-aviation background with a minimum of 1,000 hours of flight time. The ex-military and general-aviation pilots have
different flying philosophies, different types of training and standards
that they set for themselves, or that have been set for them. So we
will probably start to see different types of errors in the new pilots
compared to their counterparts – ex-military or older commercial pilots.
Planes today are pretty much automated
and can fly themselves. It has been noticed in simulator training that
the newer pilots, those who are neither ex-military nor senior commercial pilots,
are much more comfortable with the automated systems until there is an
abnormal condition. Once an abnormal condition is activated, these new
pilots start to hesitate and second-guess themselves. It has been suggested that this is because these new pilots rely more heavily on the computers and less on their manual-control skills.
The opposite seems more prevalent with senior pilots of 15 or 20 years' experience. When an abnormal condition is activated, these senior
pilots automatically take over manual control and feel their way through
the situation. The interesting part is that these same senior pilots
are not as comfortable with the highly automated systems as the newer
pilots.
So we're possibly looking at new types of errors with the new pilots being hired today. I would use the
NLP models and techniques to understand how the philosophies are
different at the neuro-physiological level, establish the V-A-K
strategies being used between the two types of pilots during various
scenarios, and compare results for the enhancement of simulator training
and design. This type of enhancement could then be transitioned into
redesign of flight decks for commercial and NASA applications.
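As a sketch of the comparison step, assuming each pilot has already been assigned a primary representational system (for instance by tallying eye-accessing cues as in the earlier sketch), one could compare the distribution of systems across the two hiring populations. The sample data below are invented placeholders, not study results.

```python
# Illustrative sketch: compare the distribution of primary
# representational systems between two pilot populations.
# The sample data are invented placeholders, not study findings.
from collections import Counter

def system_distribution(systems):
    """Fraction of pilots per primary representational system."""
    total = len(systems)
    return {s: round(n / total, 2) for s, n in Counter(systems).items()}

ex_military = ["kinesthetic", "kinesthetic", "visual", "auditory"]  # placeholder
general_aviation = ["visual", "visual", "auditory", "kinesthetic"]  # placeholder

print(system_distribution(ex_military))       # e.g. {'kinesthetic': 0.5, ...}
print(system_distribution(general_aviation))  # e.g. {'visual': 0.5, ...}
```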
AW: In a sense you are designing the
user interfaces of flight decks to the unique characteristics of the
pilots employing them depending upon whether their modality preference
is kinesthetic, auditory, or visual. Have you worked on other user
interfaces?
AB: Yes. I have done similar work for
the nuclear engineering field for the Department of Energy. I modelled
nuclear reactor operators at two facility sites to determine how they
processed information. I found that the majority of the nuclear
operators observed were primarily kinesthetically oriented with the
second largest group being auditory. The studies compared the
representational systems to preferred characteristics or attributes that
each of the operators liked or disliked on visual displays. From these
results, we were able to develop a list of screen display
characteristics for the auditories, kinesthetics, and visuals. It was noticed that the kinesthetics' and the auditories' preferences overlapped in several areas, like background colors: both groups dislike black. The kinesthetics remarked that black backgrounds made them uncomfortable and gave them a negative feeling. I was also able to determine the number of colors, the type of text and fonts, the character density on the screen, and other features preferred by each group (V-A-K). They
demonstrated preferences about whether to use icons, symbols, or text,
how images, icons and layout of screen displays should be developed, and
so on. From these studies, I am able to tell if a visual, an auditory,
or a kinesthetic individual designed the system, and what sort of
inherent errors we could expect.
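One way to picture such findings is as a lookup table of display characteristics per representational system, as in the minimal sketch below. Only the "avoid black backgrounds" entries reflect a finding stated in the interview; every other value is an invented placeholder for illustration.

```python
# Illustrative sketch: per-modality display guidelines as a lookup table.
# Only the black-background aversion is stated in the interview; all
# other values are invented placeholders.
DISPLAY_GUIDELINES = {
    "visual": {
        "max_colors": 8,                  # placeholder
        "text_density": "moderate",       # placeholder
        "avoid_backgrounds": [],
        "prefers": ["icons", "images"],   # placeholder
    },
    "auditory": {
        "max_colors": 4,                  # placeholder
        "text_density": "low",            # placeholder
        "avoid_backgrounds": ["black"],   # stated in the interview
        "prefers": ["sequential layout", "legend"],  # placeholder
    },
    "kinesthetic": {
        "max_colors": 4,                  # placeholder
        "text_density": "low",            # placeholder
        "avoid_backgrounds": ["black"],   # stated in the interview
        "prefers": ["legend", "minimal text"],       # placeholder
    },
}

def guidelines_for(system):
    """Look up display guidelines for a primary representational system."""
    return DISPLAY_GUIDELINES.get(system, DISPLAY_GUIDELINES["visual"])

print(guidelines_for("kinesthetic")["avoid_backgrounds"])  # -> ['black']
```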
AW: That's really fascinating. So just
by the screen layout you can know the modality preference of the person
who designed it. This has some profound implications for designers as
well as users!
AB: Yes it does. Another example was
several years ago at what was known then as the Chem Plant run by
Westinghouse in Idaho. I went and saw a series of screen displays newly
developed by Westinghouse for their in-house operators. I mentioned to the engineer, "Either you had operators design these screens or you're still in the process of designing them." He replied, "The operators did it; the engineers didn't develop the screen displays for the operators." When the engineer got to one particular screen, I said, "Well, either the operators couldn't see inside this room that's being depicted here, or you had an engineer do this one." He said, "The
operators couldn't see inside the room. So they went and got the piping
and instrumentation drawing so they had some clue as to what was in the
room." The graphical interfaces or screen displays that were shown to me
were very simplistic. The screen only displayed what the operators felt
was necessary to see. There were limited saturated earth tone colors on
the screen, a legend, and very little text. That indicated to me that the operators designing the screens were primarily kinesthetically or auditorily oriented. And sure enough, the operators who developed the screens were
primarily kinesthetic-auditory (sequentially oriented).
So I told the engineer, "You know these
screens are pretty good." The engineer said, "We gave the operators a
system and they came in and designed these screens." And I said, "Well,
then they're designing it for the way THEY see it, they hear it, or they
feel it. These are good screens. This is the way screens need to be designed: to the operators' needs. They're the ones working on it 40 to 60 hours a week; it's not the engineer who's going to be sitting here."
AW: What I'm getting from this is that
the inherent errors in user interfaces have more to do with the incongruence between the screen and how the user processes information. If the user is kinesthetic and the designer is visual, the user will have
more difficulty with the interface.
AB: Right. So another portion of the
research and applications we've been doing is also identifying the
designers' way of processing information and seeing where there's
conflict. So, typically, if you have a designer whose primary representational system is visual, he's going to design the system from a visual perspective. Even though the designer will show the information that's required, the information will be displayed in a format that is comfortable to the designer's way of processing information, meaning it's going to have a certain number of colors, probably too much text, too much information packed in, and a background color that is unsuitable for kinesthetically and auditorily oriented processors. Everything is going
to look the way this designer likes it to look. It's not going to look
the way the auditory or kinesthetic operator would prefer for it to
look. So, here's where we're starting to notice inherent errors being
designed into systems, and that's what we mean by inherent system design
errors. These are not inherent errors to the engineer designing the
system but to the people using the system. The system does not match the
way the operators or users are processing information. So the system is
not adapting to them. And that's an area that we're working in. We're
designing adaptive systems.
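To round this off, here is a hypothetical sketch of the adaptive-system idea and the inherent-design-error check described above: detect a mismatch between the designer's and the user's representational systems, then render from a profile matched to the user rather than the designer. The profile values and function names are invented for illustration, not the Center's actual implementation.

```python
# Illustrative sketch of the adaptive-system idea and the inherent-
# design-error check. Profile values are invented placeholders.
PROFILES = {
    "visual": {"background": "light", "text_density": "moderate"},       # placeholder
    "auditory": {"background": "earth tones", "text_density": "low"},    # placeholder
    "kinesthetic": {"background": "earth tones", "text_density": "low"}, # placeholder
}

def adapt_display(user_system):
    """Return settings matched to the user's representational system,
    instead of defaulting to the designer's own preferences."""
    return PROFILES.get(user_system, PROFILES["visual"])

def inherent_error_risk(designer_system, user_system):
    """Flag the designer/user mismatch the interview calls an
    inherent system design error."""
    return designer_system != user_system

# Example: a visual designer building for a kinesthetic operator.
if inherent_error_risk("visual", "kinesthetic"):
    print(adapt_display("kinesthetic"))
    # -> {'background': 'earth tones', 'text_density': 'low'}
```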