Online Training Is Clearly The Bee’s Knees!

John Davis, manager of training and development with Nortel supply management, says, “Prior to using the intranet, our training was done primarily through inter-departmental and inter-divisional internal mailing systems. The communications sent out would list the available training courses and ask the appropriate department head to fill out and return a form requesting training at his/her site location. Now, most of the communication is on our internal supply management Web site. Nothing seems to get ‘lost’ or ‘forgotten’ as it often did in the old manual system.”

An interactive training system delivered over an intranet offers several benefits. It eliminates travel costs for geographically dispersed employees, as well as the costs associated with instructors, seminars, and conferences. Says Davis, “By having this training available at the buyer’s desktop, we can help minimize the costs of travel, lodging, and the time a buyer is away from their daily activities.”

Also, by reducing the amount of time employees are away from the office, there is less disruption to their daily work, and they can be more productive. The information serves as an instant reference, and employees are able to review it at their own pace. “This type of training is time efficient for our employees located in outlying areas,” says Robertson.

Sources also say that intranet-based training can replace certain classroom courses. According to Mike Oswalt of global procurement and contracts at Fluor Daniel, Inc., the company implemented computer-based training (CBT), along with video training for desktop systems, in January 1998 as a way to make training more efficient. However, training for certain purchasing courses and workshops is still instructor-led.

In general, according to statistics quoted in Training Magazine, computer-based training has proven to have a 60% faster learning curve, 25%-60% higher content retention, 56% greater learning gains, 50%-60% better consistency of learning, and 38%-70% faster training comprehension, at a cost 50%-70% less than instructor-led training. Oswalt agrees: “For purchasing specifically, there are great opportunities for savings in long-distance courses. These courses target specialized topics and bring the information to the desktop rather than bringing the learner to the class.”

The training, Davis says, is most valuable for purchasing personnel because of the user’s ability to log on, utilize, understand, and apply a host of different training subject areas to their own work. For example, the online training in the code of business conduct and ethics in procurement has improved vastly over the old method. “Also, with the interactive training offered through the intranet,” says Robertson, “a user will be able to track their individual and departmental progress.”

On the other hand…

But several sources also point out that certain courses taught in an instructor-led classroom environment need that interaction to convey the material successfully. Oswalt reports that, by comparison, some concepts may be lost online. “A good example would be a course in risk management. Learning depends on the mix of participants and their willingness to share project experience as course material. The interaction of the participants is key to the success of the course.”

Another difficulty encountered thus far in setting up an intranet training program is getting a shared mind-set from everyone in the company. “Everyone needs to understand that changing the training procedure to spending more time at a workstation is as beneficial and important as going to a class,” says Oswalt.

Expense is another issue. “Developing company-specific computer-based training material is expensive,” says Oswalt. It is not only the initial cost, however; the courses require continuous review and updating. Says Davis, “Our courses are upgraded constantly. We have a full staff in supply management training. As a result, we are constantly monitoring training content to ensure that it not only meets the demands of a high-tech industry, but that it keeps up with new innovations in training venues as well.”

Another disadvantage of a computer-based training system: Not all workstations provide the most conducive learning environment. Distractions such as a ringing phone and visitors may pose problems. “Effective training requires an environment free from distraction,” says Oswalt.

Steps to follow

Employees at Nortel access the company’s Web site by logging in through a Web browser, which gives each employee free roaming access to all of the company’s internal Web sites. “We conduct training, career opportunity announcements, company news, and a myriad of other communication services in this manner,” says Davis. Some training over the intranet mimics the work the average supply management employee will need to do on the job. For example, says Davis, there is a logistics Web site that provides the user with the most economical method of transportation, the current cost, and the service expectancy. The site enables the end user to discover a host of valuable tools to do his/her job faster and more efficiently.

Nevada Power Company’s training over an intranet also is designed to mimic real-life situations in a test environment. Improvement of the system will depend on feedback from surveys. Says Robertson, “Since we are still in the design phase, it is difficult to detect difficulties with the program. But creating a consistent format and finding the most efficient way to put the information on the intranet is a challenge.”

Fluor Daniel’s training is based on enrollment, and the courses are downloaded to the workstation via the intranet. Says Oswalt, “Computer-based training (CBT) and National Education Training Group (NETg) courses are both systematic in instruction. NETg, for instance, has a pretest to examine the user’s knowledge and, from those results, tailors a custom course.”

Although each company has its own method of exploring intranet options and does so at its own pace, it’s clear that the use of intranet training will continue to expand.

Document Management Software Hits Home For Companies

A shoot-out of document management solutions found there’s no wading into enterprise-level document management: you either dip a toe in or dive in headfirst. “The only organization that has more paper to scan than insurance is government,” said Erich Berman, a PC Week Corporate Partner and advanced technology consultant at Northwestern Mutual Life Insurance Co., in Milwaukee.

One of the greatest generators of paper at NML is the “quiet company’s” 7,500-plus field agent staff, which has requested a document management solution. “Our sales agents deal with an enormous amount of paperwork, and helping them take all that paper off their desks and file cabinets and into a document management system is an important direction for us,” Berman said.

NML participated in our Shoot-Out to gain perspective on what it can realistically expect when deploying a document management system. NML corporate wants a solution that can be initiated at the agent level but can also scale up and out to other parts of the organization.

The company hopes to reduce the high expenditure of time and money that its paper-based system incurs. Although NML has document management systems in several departments, most notably the legal department, a corporatewide system is still in the planning stage.

Agents for change

Like those in most insurance companies, NML’s sales agents operate as independent business agents and must purchase the hardware and software required to work within the parameters set by NML and the insurance industry. However, the agents rely heavily on technical support and training from NML’s Field Office Development and Link department, which operates as a technology liaison between the company and its sales force. Because each sales agent is an independent operator, FODL can recommend, but not mandate, the technology each sales office should use.

Berman, who helps NML’s field sales force keep up with and integrate the latest technology, is all too cognizant of the difficulty in balancing the advantages of new technology with the realities of doing business. “Because cost is an important factor for each of our agents, we’d like to recommend a modestly priced entry-level package with the ability to scale across our whole organization,” he said.

Rather than have each sales office deploy a mix of document management systems that may (or, more likely, may not) work together, NML would like to eventually roll out an enterprisewide system that would be available to all of its field sales force as well as to all headquarters. This presents major hurdles in terms of cost and capability and reflects challenges faced by any company dealing with business partners along the supply chain.

“Our agents are independent businesspeople and want a cost-effective document management solution they can deploy in their offices,” Berman said. “Doing away with paper altogether is an eventual goal, but, due to regulatory and legal considerations, we must hang on to some paper documents.”

Multifaceted judging panel

In addition to our Labs’ analysts and NML’s Berman, the Shoot-Out judging panel included NML IS managers, NML field agents, NML corporate staff members, and a representative from Wisconsin’s Bureau of Tax and Accounting.

Evaluated during the Shoot-Out were enterprise-class document management systems from Eastman Software Inc., FileNet Corp., and Lotus Development Corp. All of the systems we looked at were capable and would enable a company to better manage documents and workflow, but, unfortunately, none would allow the insurance company to hit an entry-level price point in line with the field agents’ budgets.

During the Shoot-Out, the judges also considered products at the low end of the document management scale. Products such as Caere Corp.’s PageKeeper Pro and ScanSoft Inc.’s Pagis Pro cost less than $500 but are suitable only for one, two or three users on a peer-to-peer LAN.

These packages would be suitable for the short-term needs of the agents, who were impressed with the capabilities of PageKeeper Pro during a hands-on demonstration at the Shoot-Out. However, such packages would not allow for growth beyond the field office level.

SpeedScan’s turnkey application is in place at one of NML’s field offices, and it has demonstrated capabilities beyond those of the low-end document management packages that we looked at. However, NML has determined that the SpeedScan application is neither cost-effective nor scalable enough for an enterprisewide deployment.

Both Documentum Inc. and PC Docs Group International Inc., vendors of high-end document management systems, were invited to participate in the Shoot-Out but declined. SpeedScan also declined to participate.

EnFishing For Greatness

Enfish Technology Inc.’s Enfish Tracker Pro information management software is a dream come true for a company’s information pack rats, giving them an easy way to index and find the wide variety of data they’ve stashed away on their PC hard drives.

The product tracks everything from the contents of e-mail and desktop applications to the latest information on user-specified Web sites, although the Web search feature was unimpressive in our Labs’ tests. Even without the Web searches, the Windows product is worth its $80 price.

Other single-user indexing tools, such as Compaq Computer Corp.’s free AltaVista Discovery, do much the same thing but aren’t as customizable as Enfish. Enfish’s principal drawback is that its indexing engine takes up a lot of memory–the company recommends it be used on PCs with 64MB of RAM.

As useful as Enfish is for individuals inundated with information, a workgroup version that could index data on a server and share those indexes with a group of users would be more valuable for businesses. Enfish officials said they plan to add workgroup features to an update slated to ship in March; the single-user version we tested shipped in late October.

We gave Enfish–the name stands for Enter, Find and Share–the formidable task of indexing our entire hard drive, looking at data stored in Microsoft Corp.’s Word and Excel applications, and at e-mail messages stored in Lotus Development Corp.’s Notes and Microsoft’s Outlook, as well as searching specified Web sites for new materials of interest.

Enfish indexes Corel Corp., Lotus and Microsoft desktop applications and tracks messages in a variety of e-mail packages, including Qualcomm Inc.’s Eudora, the e-mail component in Netscape Communications Corp.’s Communicator and America Online’s Mail. The company plans to add support for other packages, including Web-based e-mail, next year.

In tests, however, Enfish didn’t search Notes mail messages as thoroughly as those in other mail systems: it searched only the in-box, not the folders in which we had stored messages. Enfish officials said this problem has occurred at other Notes sites, and they are working to fix it.

Works in the background

Enfish indexes a PC’s hard drive in the background whenever the indexing engine detects that the PC hasn’t been used for 5 minutes. Although users can set wider intervals for scanning a particular application or file directory for new material, the indexing engine does take a considerable amount of memory (6MB), and frequent indexing can tie up processor time and slow machine performance.

The index files themselves can also grow quite large. The more RAM on the PC, the better the program will perform.

Enfish doesn’t require users to change how they organize their applications or where they store data. Rather than moving files into folders, the program leaves data where it is found, establishes a link and displays it in a streamlined user interface.

We were able to quickly create search terms, which Enfish calls trackers, by entering a keyword for a topic along with the names of individuals associated with that keyword. We could refine a search by applying a set of Boolean filters to focus on just a specific combination of phrases. For example, we could have one tracker keep tabs on all instances of documents and e-mail on our hard drive that contained the phrase “groupware” and add another tracker that narrowed the search to documents mentioning both Notes and Microsoft Exchange.
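
The tracker-plus-Boolean-filter idea can be sketched in a few lines of Python. This is purely our illustration of the concept; the function names, the sample documents, and the matching logic are invented and bear no relation to Enfish’s actual implementation.

```python
def make_tracker(keyword, require_all=()):
    """Return a predicate matching text that contains `keyword` and,
    optionally, every phrase in `require_all` (a Boolean AND filter)."""
    def matches(text):
        low = text.lower()
        return keyword.lower() in low and all(
            phrase.lower() in low for phrase in require_all)
    return matches

docs = [
    "Groupware roundup: Notes and Microsoft Exchange compared",
    "Groupware pricing update",
    "Spreadsheet tips for Excel users",
]

groupware = make_tracker("groupware")
narrowed = make_tracker("groupware", require_all=("Notes", "Exchange"))

print([d for d in docs if groupware(d)])  # matches the first two documents
print([d for d in docs if narrowed(d)])   # matches only the first document
```

Running a saved predicate over an index instead of raw files is what keeps such trackers fast; the sketch scans the text directly only for brevity.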

Getting the picture

Enfish comes with a variety of viewers that let users view some documents without opening the original application. This is particularly useful for looking at GIF, JPEG and BMP image files.

Enfish also provides a handy way to keep track of images and other files that are impossible to index. To organize JPEG files downloaded from a digital camera, we wrote a cover note describing each photo and attached it to the file. Enfish then indexed the notes, which let us quickly locate the photos.

Notes can also be used without attached files just to jot down ideas. If a note contains a search term found in one of a user’s trackers, it is automatically added to that tracker.

Enfish’s limited Web searching tool lets users construct a search and run it against Yahoo, Excite and other Web-based search engines. We could also program Enfish’s search engine to scan Web sites regularly, look for terms stored in information trackers and notify us when new information arrived.

But the Web search tool doesn’t download what it finds, so it won’t replace sophisticated offline browsing tools such as DataViz Inc.’s Web Buddy or the improving offline features found in Microsoft’s and Netscape’s Web browsers.

Enfish Tracker Pro lets users define trackers and displays results on the right.

The Executive Summary: Enfish Tracker Pro

Enfish’s Enfish Tracker Pro will be a welcome single-user desktop tool for managing a growing stack of online information in locations ranging from e-mail to Web sites. The software regularly indexes a user’s hard drive and performs Web searches for user-specified terms, making it easy to locate critical data no matter where it is stored.

Pros: Gathers information from e-mail, desktop applications and Web sites into one easy-to-search index; is easy to customize; includes viewers for image files; can index images via attached cover notes, as well as notes that aren’t attached to anything.

Cons: Index engine requires a large amount of RAM; has limited Web search tools; could not index Lotus Notes mail messages as thoroughly as messages in other mail systems.

System Sensors – Are They On A Good Pace?

Introducing a standard microprocessor to the mix was the next evolutionary step, from which came the first smart sensors (Fig. 1c). This provided a more powerful system that could use software for testing, calibrating, and linearizing the sensing system. Analog-to-digital conversion was required on both sides of the processor, however, and that added cost. Moreover, microprocessors themselves make sensing systems more expensive than many applications can afford.

Don Pullen, applications engineer with Texas Instruments Inc.’s Linear Products Div., Dallas, sees the most recent evolutionary step as one in which the processor and its ADC and DAC cohorts are replaced with a signal processor dedicated to sensing operations (Fig. 1d). Raising the level of integration and replacing the standard microprocessor with an intelligent device specifically designed for the application gives sensor designers the advantages of programmability, low power consumption, and superior linear performance.

These sensor-signal processors can condition the analog signal from the transducer using a combination of hardware and software. Sensor linearization, for example, often includes piece-wise linear approximation of the sensor’s output characteristics by selecting a number of segments.

A good example is the nonlinear-response thermistor, used to measure temperature by monitoring the change in its resistance. The first step in linearizing the output is to introduce a resistor in parallel with the thermistor, which yields a good approximation of a linear response. Even better accuracy can be achieved with a simple algorithm that approximates the curve by a series of straight lines. “Software-implemented multipoint approximation can even be adjusted to match the characteristics of the individual sensors to which the processor is attached,” says Pullen.
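
A minimal sketch of that multipoint approximation, in Python for illustration: the nonlinear resistance-temperature curve is stored as a few calibration breakpoints, and readings in between are linearly interpolated. The breakpoint values here are invented, not taken from any real thermistor.

```python
# calibration breakpoints: (resistance in ohms, temperature in deg C),
# listed as resistance falls while temperature rises
CAL = [(32650.0, 0.0), (12490.0, 20.0), (5328.0, 40.0), (2490.0, 60.0)]

def temperature(r_ohms):
    """Piece-wise linear interpolation of temperature from resistance."""
    if r_ohms >= CAL[0][0]:      # clamp below the calibrated range
        return CAL[0][1]
    if r_ohms <= CAL[-1][0]:     # clamp above the calibrated range
        return CAL[-1][1]
    # find the segment bracketing the reading and interpolate within it
    for (r_hi, t_lo), (r_lo, t_hi) in zip(CAL, CAL[1:]):
        if r_lo <= r_ohms <= r_hi:
            frac = (r_hi - r_ohms) / (r_hi - r_lo)
            return t_lo + frac * (t_hi - t_lo)

print(temperature(8909.0))  # midpoint of the 20-40 deg C segment -> 30.0
```

Per-sensor adjustment, as Pullen describes, amounts to downloading a different CAL table into each unit.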

Smart sensors can be self-calibrating or, at the very least, make calibration easier. During final test, multiple sensing units can be multiplexed with a host computer that exercises each sensor individually and downloads the appropriate calibration constants to the device. Moreover, field calibration can be simplified by adding a keyboard or LCD interface to the sensor, so that a handheld computer can calibrate it. Software also provides a means of running diagnostics on both the sensor and the transmitter in the sensing system. “Autodiagnostics range from watchdog timers to periodic self-testing routines during operation,” says Pullen.


An outstanding example of a next-generation smart-sensing system on one chip can be seen in technology developed by Monolithic Sensors Inc., Rolling Meadows, Ill., to gauge low pressures. Made entirely of silicon, it combines an air-pressure sensor, CMOS analog sensor circuitry, and CMOS digital interface circuitry on one device.

MSI creates the sensor with the aid of micromachining technology. The device is essentially a capacitor consisting of a pair of diaphragms; the capacitance changes as one diaphragm plate flexes under air pressure (Fig. 2). Corrugations machined in the silicon allow the diaphragm to flex. “Its movement is both linear and quite large,” says Warren Graber, an MSI applications engineer. Capacitance can change by as much as 25% during operation, which makes for an easy-to-detect, accurate reading. In addition, linearity is one of the sensor’s primary advantages.

Being fabricated completely of silicon means that all components have the same coefficient of expansion, which curbs thermal-sensitivity problems. The sensor can also be built with mechanical stops for an over-pressure capability of 100 times its rated pressure. Perhaps most important, however, is that since it’s made of silicon, all of the associated circuitry can be put on one chip.

On the analog side, the variable capacitor is used in an oscillator whose frequency varies with pressure. In addition, a reference frequency is created by a fixed capacitor that drives another oscillator. The frequency information is converted to digital samples that are proportional to the ratio of the two capacitances. A digital CMOS interface provides output as two 8-bit words. In typical smart-sensor fashion, the digital circuits allow calibration data to be stored in EPROM. The sensor can be calibrated in the factory and needs no trimming or periodic recalibration. Total worst-case error for the system is less than 5%, and digital compensation can improve accuracy to within 0.5%. The device is packaged in a 24-pin plastic DIP with two airhose fittings, so the sensor can operate with a differential air-pressure input.
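
The ratiometric readout can be illustrated with a toy calculation. Everything here is our own assumption for the sake of the sketch: the calibration constants, the kPa scaling, and the simple linear mapping stand in for whatever MSI actually stores in the part’s EPROM.

```python
# factory calibration constants (invented): pressure = gain * (ratio - offset)
CAL_OFFSET = 1.00   # oscillator-frequency ratio at zero pressure
CAL_GAIN = 40.0     # kPa per unit of ratio change (illustrative)

def pressure_kpa(f_sense_hz, f_ref_hz):
    """Convert the two oscillator frequencies to pressure. Capacitance
    rises with applied pressure, so the sensing oscillator slows down
    relative to the fixed reference oscillator."""
    ratio = f_ref_hz / f_sense_hz   # proportional to C_sense / C_ref
    return CAL_GAIN * (ratio - CAL_OFFSET)

print(pressure_kpa(100_000, 100_000))  # 0.0 (zero pressure)
print(pressure_kpa(80_000, 100_000))   # 10.0 (25% capacitance change)
```

Because the output depends only on the ratio of the two oscillators, drift that affects both capacitors equally cancels out, which is the point of the ratiometric design.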

Smart sensing as implemented in MSI’s technology is but one part of the new outlook for sensors. New materials that provide superior performance to conventional solutions also are under investigation in laboratories around the world.

Matsushita Electric Industrial Co. Ltd.’s laboratories in Tokyo, Japan, are currently tackling pressure-sensing technology. Using an iron-base amorphous magnetic-alloy ribbon, researchers have developed an oil-pressure sensor that measures up to 20 megapascals (MPa) with full-scale accuracy within 2%. The sensor operates at temperatures between -30 and 100°C, which makes it acceptable for suspension and brake systems in automobiles.

Amorphous magnetic alloys have many attractive properties for harsh-environment sensor applications, including high mechanical strength, corrosion resistance and, most important, a magnetic permeability that varies with stress. The research team’s cylindrical sensor is built of titanium and has two unconnected hollow chambers, one at each end. One chamber is filled with air and serves as a reference; the other is filled with oil. A strip of iron-base amorphous alloy is annealed to the cylinder. Two coils, encircling the reference and detecting chambers respectively, detect the change in permeability of the amorphous alloy when pressure is applied. To measure an output, the sensor is used in a simple electronic circuit that has two inductors (the coils), two compensation resistors, an ADC, and a DAC. The circuit is driven by a 32-kHz voltage source.

Although the sensor is still a few years from commercial use, the researchers have been able to model its response mathematically. This means it’s a likely candidate for integration into a smart-sensor system because the mathematical model can be used for in-situ calibration and self test.


Significant progress is also being made in another new field: optical sensors. Research interest is particularly strong for applications such as pollution monitoring, in which the concentrations of the material to be identified are as low as parts per billion. Electronic sensors can seldom monitor at this level of precision.

Researchers at the Georgia Institute of Technology, for example, have developed a generic device called an integrated-optic interferometer. This planar waveguide consists of a glass substrate of approximately one square inch (Fig. 3). It’s coated with a thin film that has a slightly higher index of refraction than glass. The thin film is made of a material that reacts in a detectable but reversible way with the contaminant the device is supposed to sense. The basic operating model calls for the thin film’s index of refraction to vary proportionally with the amount of contaminant it absorbs.

In a generic interferometer, changes in refraction are measured by having a tiny laser emit light that’s split into two beams. One beam propagates through the glass-based substrate to establish a reference point. The other beam propagates through both the substrate and the thin film. If the film’s refraction index changes, the beam will undergo a phase shift relative to the reference. The phase shift is generally proportional to the concentration of the contaminant. Researchers can develop calibration curves to measure the amount of contaminant down to parts per billion.
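
As a rough worked example of the readout math: for a beam-film interaction length L and film index change Δn at wavelength λ, the phase shift is 2πLΔn/λ, and the calibration curve then maps phase shift to concentration. Every number below is our own illustrative assumption, not a value published by the Georgia Tech team.

```python
import math

WAVELENGTH_M = 633e-9   # assumed visible laser wavelength
PATH_LENGTH_M = 0.02    # assumed beam-film interaction length on the chip

def phase_shift_rad(delta_n):
    """Phase shift of the sensing beam for a film index change delta_n."""
    return 2 * math.pi * PATH_LENGTH_M * delta_n / WAVELENGTH_M

def concentration_ppb(delta_n, ppb_per_radian=50.0):
    """Map the phase shift to concentration through an assumed linear
    calibration curve (the slope is invented for illustration)."""
    return phase_shift_rad(delta_n) * ppb_per_radian
```

Even a tiny index change of 10^-6 produces a phase shift of roughly 0.2 radian under these assumptions, which is why the technique can reach parts-per-billion sensitivity.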

Because the sensor is so small and its operation relatively simple, it can be easily taken to the location of the suspected contaminant. A single sensing device can also be used to detect more than one contaminant. This is accomplished by applying several types of thin films to the glass substrate. Each film is reactive to a different contaminant.

Reversibility of the chemical reaction is an important characteristic for the sensor, but that’s largely a matter of finding the right materials for the thin film. One of the first working integrated-optic interferometer prototypes, for example, was developed to measure the presence of ammonia. A film of dodecylanilinium salt dissolved in ethanol was spin-applied to the substrate. Early devices could detect ammonia in the air down to a few parts per million. With full integration of the laser light source and a planar waveguide, the research team was able to reduce sensor noise, pushing the detection limit down to the 100-parts-per-billion range.

The device doesn’t react at all to water vapor, which is important because it will be used primarily in the field. Moreover, this is one research project that doesn’t have to endure an aggressive cost-reduction phase to reach commercial viability: The device’s components cost less than $100. The technology has been licensed to Photonic Systems Inc., a “technology incubator” company on the Georgia Tech campus. Work is underway on an integrated-optic interferometer that can detect multiple contaminants.


Medical applications have also shown a strong affinity for advanced sensor technology using light sources. In a collaboration between Sandia National Laboratories and the University of New Mexico, both of Albuquerque, N.M., infrared light is used in a non-invasive glucose sensor for diabetes patients. Although glucose was chosen for the first system, the technology, which is based on infrared spectroscopy and statistical techniques, is readily applicable to other measurements.

Near-infrared light has the capacity to penetrate biological tissue, and this sensor operates on that principle. Besides being non-invasive, the procedure allows continuous blood-glucose monitoring, which is highly desirable for diabetics undergoing surgery or in childbirth. Another scenario made possible by continuous glucose monitoring with light is a monitor that could be worn with a programmable insulin pump, in effect creating an artificial pancreas. Small pumps that can meter insulin into the bloodstream have been researched at Sandia.

In the prototype monitor, wavelengths of light passing through the patient’s finger are absorbed by components in the blood. The spectral characteristics of the remaining light are recorded by a spectrometer and evaluated by versions of algorithms originally developed at Sandia to analyze the aging process of nuclear-weapons material. This relatively new branch of analytical chemistry is called chemometrics. Two statistical methods–Partial Least Squares and Principal Component Regression–were tested, as were three instrument configurations. The researchers concluded that near-infrared testing is viable and are seeking commercial partners through a company, Rio Grande Medical Technologies Inc., Albuquerque, NM.

Quite a different approach to the same problem, glucose sensing, is being taken by researchers at Fujitsu Ltd.’s research laboratories in Japan. The key component in this group’s biosensor research is a miniature Clark oxygen electrode micromachined in silicon. The sensor detects the changes in oxygen concentration that result from glucose oxidation, which is catalyzed by a glucose oxidase enzyme in the sensor. The amount of oxygen is almost linearly proportional to the current, measured in nanoamperes, that is generated by the Clark oxygen electrode.

Two types of electrodes were fabricated. In the first, the electrode resides on a 2-by-15-by-400-mm undoped silicon substrate. The electrolyte is contained in two etched 0.2-by-0.7-mm V grooves. The anode is formed in the grooves, and the cathodes occupy the area between them. Both anode and cathode consist of a 400-nm-thick gold film over a 40-nm chromium adhesion layer. Several electrolytes were tested, including potassium chloride suspended in polyvinylpyrrolidone (PVP) and polyvinyl-4-ethylpyridinium bromide (PVEP). A gas-permeable membrane resides on top of the electrolyte, and steam is used to activate the electrolyte by infusing water into it. The glucose-oxidizing enzyme adheres to the membrane.

The prototype sensor exhibited a response time of one minute and showed good linearity for glucose concentrations between 56 micromoles and 1.1 millimoles at 38°C and a pH of 7.0. It had a lifetime of about 10 days before the enzyme matrix detached from the gas-permeable membrane. Using the same miniature Clark electrode structure, biosensors were also fabricated to test for the presence of carbon dioxide, L-lysine, and hypoxanthine, a chemical that can be monitored to determine the freshness of fish. The Fujitsu researchers found several problems with the first electrode structure, however, including electrochemical crosstalk between electrodes. As a result, they’re working on a new electrode structure that incorporates several architectural changes.

At the cutting edge of sensor technology, researchers are beginning to design circuits that can recognize images and sounds. Work at the California Institute of Technology, Pasadena, for example, attempts to model, to a limited degree, the ear’s cochlea in both function and structure. The ultimate goal of the research is to create devices that interpret and understand sound, pinpoint the direction from which a sound is coming, and perhaps even provide a cochlear prosthesis.


At Cal Tech, researchers John Lazzaro and Carver Mead developed a chip architecture that computes all outputs in real time using analog continuous-time processing. As one might imagine, the project is conceptually complex. One key difference between biological ears and the chip is that the mechanical processing done by the ear is accomplished electronically in the chip. For example, sound energy at the eardrum is coupled into a mechanical traveling-wave structure called the basilar membrane, which converts time-domain information into spatially encoded information. “In the chip, this is approximated by a cascade of second-order circuits with exponentially increasing time constants,” says Lazzaro, who is now at the University of California at Berkeley.
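
A toy software model can make the cascade idea concrete: each stage is a second-order low-pass section, and the time constants grow geometrically along the chain, so early taps preserve high frequencies while later taps respond only to lows. The stage count, Q, and time constants below are invented for illustration; the real chip computes this with analog circuits, not a digital loop.

```python
def second_order_lowpass(samples, tau, dt=1.0 / 44100, q=0.7):
    """Forward-Euler simulation of one second-order low-pass section:
    y'' + (w/q) y' + w^2 y = w^2 x, with natural frequency w = 1/tau."""
    w = 1.0 / tau
    y = dy = 0.0
    out = []
    for x in samples:
        ddy = (x - y) * w * w - (w / q) * dy
        dy += ddy * dt
        y += dy * dt
        out.append(y)
    return out

def cochlea_cascade(samples, n_stages=8, tau0=1e-3, ratio=1.5):
    """Feed the signal through the cascade and return every stage's
    output; each tap plays the role of one position along the basilar
    membrane, with exponentially increasing time constants."""
    taps = []
    x = samples
    for k in range(n_stages):
        x = second_order_lowpass(x, tau0 * ratio ** k)
        taps.append(x)
    return taps
```

Feeding the taps to hair-cell and neuron models, as the chip does, would then convert each tap’s waveform into spike trains.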

In addition to the basilar membrane, the chip also models outer hair cells, inner hair cells, and spiral-ganglion neurons (Fig. 4). The outer-hair-cell circuits control local damping of the basilar-membrane circuit. Taps along the basilar membrane connect to circuit models of the inner hair cells, whose outputs connect to circuits that model spiral-ganglion neurons. These neuron circuits form the primary output of the chip that models the auditory response.

In both physiological systems and the IC, the response to sound is a series of voltage spikes, which correspond roughly to biological neurons firing. After the chip was fabricated, it was subjected to two tests. The first compared the chip’s response to an 1840-Hz tone with that of a cat. The second consisted of 2000 30-dB clicks. In both cases, the chip’s response was qualitatively the same as the response from the auditory fiber of a cat. The tests were relatively simple ones, however, and more modeling of physiological systems is needed before commercial versions can be implemented. The chip can encode 25 dB of dynamic range, for example, compared with 125 dB for human ears.

The importance of the research results so far is two-fold. First, the chip does roughly model the response of the biological system it mimics. Second, the computations are done in real time using an architectural model similar to the physical cochlea. The model includes autocorrelations in time and cross-correlations between auditory fibers.

Research into silicon retinas and position-sensing devices is further along than auditory systems. Work is well underway in several universities including Cal Tech, the University of California at Berkeley, and the University of Pennsylvania, to name just three (ELECTRONIC DESIGN, May 3, p. 33).

Recent investigative research at the University of Pennsylvania focused on 2D motion detection, a capability that would have use in robotics, surveillance, and other systems. By combining two models that seek to describe motion detection in biological systems, the University of Pennsylvania researchers were able to design an analog chip that could detect motion as fast as 6 meters/s in two dimensions. It also responds well over a range of six orders of magnitude in illumination.

The chip was a 5-by-5 array of photoreceptors, each 100 by 100 [mu]m. Although the output current of the sensors is basically linear, it’s converted to a logarithmic response so that the sensors can respond to a large input range without saturating. Motion sensing requires edge detection. This is implemented by a circuit that functionally models a retina to some degree. The inputs of every pixel are averaged in a complex operator called Difference of Gaussians, which discards the local average input and shows only discontinuities. As in the silicon cochlea, many of the computations are executed in the analog domain.
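A one-dimensional software sketch shows why the Difference of Gaussians operator discards the local average: subtracting a wide Gaussian blur from a narrow one cancels in flat regions and leaves a response only near discontinuities. The kernel widths below are illustrative assumptions, not the chip's actual parameters:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel sampled at integer offsets."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Direct convolution with edge clamping (replicate border samples)."""
    r = len(kernel) // 2
    n = len(signal)
    return [sum(kernel[j + r] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1))
            for i in range(n)]

def difference_of_gaussians(signal, sigma_narrow=1.0, sigma_wide=2.0, radius=6):
    """Narrow blur minus wide blur: zero in flat regions, biphasic at edges."""
    narrow = convolve(signal, gaussian_kernel(sigma_narrow, radius))
    wide = convolve(signal, gaussian_kernel(sigma_wide, radius))
    return [a - b for a, b in zip(narrow, wide)]
```

Feeding in a step signal produces near-zero output away from the step and a pronounced swing right at it, the edge-detection behavior the circuit exploits.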

Motion detection is a difficult problem, but easier ones, such as character reading, are already being solved in commercial applications that generally include neural networks. The Synaptics Inc. I1000 chip, for example, combines an imager with a single-font character-recognition system to read the computer-coded characters on bank checks. “The first commercial applications to use these silicon eye and ear sensors,” says Lazzaro, “will solve simple tasks for lower-power, high-volume, size-sensitive applications.”

Organizing Yourself – Are You Effective?

“Why is the world a failure?” asks Hilmer, speaking in a phone interview from Sydney, Australia, where he teaches at the Australian Graduate School of Management. “Because organizations are hierarchical machines, not using the thinking power of their employees?” The problem with that argument, he says, is that it doesn’t square with what you see if you take the trouble to observe what’s actually happening. “There are plenty of companies that seem to be working just fine. There are plenty of companies that have figured out ways to tap employee creativity within existing structures and hierarchies.”

But such arguments are all but drowned out these days by the crescendoing drumroll that heralds the unveiling of the reinvented, 21st century business enterprise. Just what will this radically new, post-modern, post-reengineering entity look like? Will it be the giant corporate nation-state (British Telecom/MCI, Boeing/McDonnell Douglas)? Will it be the loosely connected, virtual organization – the “adhocracy” of alliances and far-flung outsourcing contracts?

The answer is not yet clear. But whatever big new idea next emerges to capture the fancy of business thinkers, it’s hard to imagine a notion that will be more challenging than the one Wheatley and others are proposing: the biological organization; a complex, self-adaptive system; chaos theory as the next management paradigm.



“The world seeks organization,” writes Wheatley in her book. “It does not need us humans to organize it. . . . Organization wants to happen.” She instructs us to look to nature for examples of the world’s self-organizing handiwork. With no master flight plan to guide them, birds fly in flocks. Termites in Australia and Africa build towers soaring 30 feet into the air. These engineering marvels, laced with intricate tunnels and graced with arches, are the largest structures on earth in proportion to the size of their builders.

And yet, all this happens not from a detailed blueprint but as the improbable result of a curious work process, observes Wheatley. “With antennae waving, [termites] bump up against one another, notice what’s going on, and respond. Acting locally to accomplish what seems to be next, they build a complex structure that can last for centuries. Without engineers, their arches meet in the middle.”

Not to take anything away from the marvel of termite mounds, but convincing business leaders that we should run our airlines and petrochemical plants based on the termite model is likely to prove a tough sell.


When questioned by TRAINING about that potential difficulty, Wheatley characterized her book as not so much a call to action but rather “a meditative call to awareness. Awareness that a very different world view is available to us.” She acknowledges that for many people the shift to this new world view will not come easily.


As a metaphor, Wheatley’s hymn to self-organizing is not without appeal. There is, after all, evidence to show that workers are capable of self-organizing, and that the few companies that grant workers the power to make decisions affecting their jobs often prosper as a result.

But while Wheatley invokes self-organization as a metaphor, others are starting to speak of it as a new model – the next idea after reengineering. And that’s where the new world view becomes challenging. As Wheatley herself observes, “Life seeks order in a disorderly way…mess upon mess until something workable emerges.”


It is tempting to brand the self-organizing system as the next misguided management fad – the idea of the flattened hierarchy carried to the point of absurdity. (You want to order 40 boxes of photocopier paper? Please call back in 25 years. By then our new sales organization should have emerged.)


The problem with dismissing the ideas bubbling up around self-organization, however, is that, unlike some of the more faddish notions to come down the management pipeline in recent years, these ideas do have some scientific underpinnings.

Explorations into the world of chaos theory, and its spin-off field of complexity theory, are challenging some of science’s long-held assumptions about how the world works, just as quantum physics and Heisenberg’s uncertainty principle knocked Newtonian physics into a cocked hat earlier in this century. The question is, Will complexity theory provide us with useful models and metaphors for understanding the world, including the world of work? Or will it lead us down the path of strangeness to an intellectual cul-de-sac, where we can only shake our heads at the unpredictability of the world?



For decades most scientists have believed that the forces shaping the world and the universe are random, that there is no invisible, guiding hand organizing the show. But if that is so, then why, some scientists ask, do galaxies form pinwheels and certain marine creatures turn into chambered nautiluses?


The work of molecular biologist Stuart Kauffman and others at the Santa Fe Institute (SFI) has recently shed light on nature’s counterintuitive and previously invisible tendency to organize itself. Several years ago, Kauffman constructed a network of 200 lightbulbs in which each bulb was linked to two others using Boolean logic (e.g., bulb number 17 might be instructed to go on if bulb number 23 went off, and to turn itself off if bulb number 64 went on). The number of on-off configurations in such an arrangement is an astronomical 10^30. Given those numbers, chaos should reign; there should be no discernible pattern to the lighting arrangements. But in fact, after about 14 iterations, the lightbulb network settled into a pattern of just half a dozen on-off combinations.
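Kauffman's lightbulb experiment is straightforward to reproduce in software. The sketch below is an illustration, not Kauffman's original code; the network size and random seed are arbitrary choices. It wires each node to k random inputs, assigns each a random Boolean function, and iterates the network until a state repeats, exposing the short attractor cycles he observed:

```python
import random

def random_boolean_network(n=200, k=2, seed=1):
    """Kauffman-style network: each node reads k random inputs and applies
    a random Boolean function (a random truth table of 2^k entries)."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    state = tuple(rng.randrange(2) for _ in range(n))
    return inputs, tables, state

def step(inputs, tables, state):
    """Synchronously update every node from its inputs' current values."""
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(len(state))
    )

def attractor_length(inputs, tables, state, max_steps=10000):
    """Iterate until a state recurs; return the length of the cycle found."""
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(inputs, tables, state)
    return None
```

Despite the astronomical state space, a k=2 network typically falls onto a short cycle after only a handful of steps, which is the "spontaneous order" Kauffman describes.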


“We have always known that simple physical systems exhibit spontaneous order: an oil droplet in water forms a sphere; snowflakes exhibit sixfold symmetry,” explains Kauffman in his book At Home in the Universe (Oxford University Press, 1995). “What is new is that the range of spontaneous order is enormously greater than we have supposed. Profound order is being discovered in large, complex and apparently random systems. I believe that this emergent order underlies not only the origin of life itself, but much of the order seen in organisms today.”

A nonliving arrangement of lightbulbs may seem far removed from a social system of human workers. But another early simulation provides a clue to complexity theory’s potential relevance. In his book Complexity (Simon & Schuster, 1993), M. Mitchell Waldrop describes how Craig Reynolds, a researcher at the nuclear physics lab in Los Alamos, NM, caused a stir in 1987 by simulating bird-flocking behavior on a computer screen. The birdlike objects, called “boids,” flew together in a flock and swerved as a unit to avoid obstacles. When forced to break apart to avoid an obstacle, they soon regrouped again into a new formation.

Yet nothing about their programming told the objects to display this collective behavior. There was no master flight plan that guided the motions of the flock. Each object was programmed individually with just three rules: Fly in the direction of the other objects; try to match velocity with neighboring boids; and avoid bumping into things. The essence of complexity theory is that simple agents obeying simple rules can interact to create elaborate and unexpected behaviors.
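A minimal version of those three rules can be written in a few lines of Python. This is a sketch under stated assumptions: the force coefficients are invented for the example, and Reynolds' original boids used steering behaviors rather than these exact terms, but the collective behavior emerges the same way:

```python
def boid_step(boids, cohesion=0.01, alignment=0.1, separation=0.05,
              min_dist=1.0, dt=1.0):
    """One synchronous update of a flock. boids is a list of dicts with
    'pos' and 'vel' as (x, y) tuples; O(n^2) neighbor scan for clarity."""
    new = []
    n = len(boids)
    for b in boids:
        others = [o for o in boids if o is not b]
        # Rule 1 (cohesion): steer toward the average position of the others.
        cx = sum(o["pos"][0] for o in others) / (n - 1)
        cy = sum(o["pos"][1] for o in others) / (n - 1)
        # Rule 2 (alignment): match the others' average velocity.
        vx = sum(o["vel"][0] for o in others) / (n - 1)
        vy = sum(o["vel"][1] for o in others) / (n - 1)
        ax = cohesion * (cx - b["pos"][0]) + alignment * (vx - b["vel"][0])
        ay = cohesion * (cy - b["pos"][1]) + alignment * (vy - b["vel"][1])
        # Rule 3 (separation): push away from neighbors that are too close.
        for o in others:
            dx = b["pos"][0] - o["pos"][0]
            dy = b["pos"][1] - o["pos"][1]
            if dx * dx + dy * dy < min_dist ** 2:
                ax += separation * dx
                ay += separation * dy
        nvx, nvy = b["vel"][0] + ax * dt, b["vel"][1] + ay * dt
        new.append({"pos": (b["pos"][0] + nvx * dt, b["pos"][1] + nvy * dt),
                    "vel": (nvx, nvy)})
    return new
```

No rule mentions a flock, yet after a few hundred steps the agents travel as a compact group at a shared velocity, the elaborate behavior no individual was programmed to display.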


Taking their cue from these laboratory experiments and simulations, a few companies have found practical use for complexity theory, devising local, rule-based solutions for problems that in the past would have been addressed by a solution imposed from above.

General Motors Corp.’s truck plant in Fort Wayne, IN, formerly used a master scheduling program to determine which of 10 different paint booths painted which truck bodies as the trucks rolled off the assembly line. As long as all parts of the system moved along in sync, things worked well; but if any one piece of the system slowed down, things fell apart in a hurry.

In the early ’90s, GM switched to a complexity-based computer system in which a computer at each paint booth acts as an independent agent, “bidding” on each new paint job based on its ability to take on additional work. The calculations include the cost of each new job; for instance, can this job be done without a color changeover? This self-organizing system quickly evolved a pattern for painting trucks that reduced color changeovers at the paint booths by 50 percent and now saves GM more than $1 million a year.
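The bidding mechanism can be sketched as follows. This is a hypothetical illustration of the idea only, not GM's actual system; the class names and cost weights are invented for the example:

```python
# Illustrative cost weights (assumptions, not GM's figures): a color
# changeover is far more expensive than simply waiting in a queue.
CHANGEOVER_COST = 10.0
QUEUE_COST = 1.0

class Booth:
    """One paint booth acting as an independent bidding agent."""
    def __init__(self, name, current_color):
        self.name = name
        self.current_color = current_color
        self.queue = []

    def bid(self, color):
        """Price a job from local state: queue length plus changeover cost."""
        cost = QUEUE_COST * len(self.queue)
        if color != self.current_color:
            cost += CHANGEOVER_COST
        return cost

    def accept(self, color):
        self.queue.append(color)
        self.current_color = color

def assign(booths, color):
    """Give the job to the lowest bidder; no central schedule is consulted."""
    winner = min(booths, key=lambda b: b.bid(color))
    winner.accept(color)
    return winner
```

Because each booth prices changeovers highly, jobs of the same color naturally pool at the same booth, and the reduction in changeovers emerges without any master scheduling program.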

Using “genetic algorithm” software that employs complexity concepts, Deere & Co., the farm-equipment maker, has developed an optimum scheduling program for the manufacture of customized seed planters, which can be assembled in a mind-boggling array of 1.6 million different configurations.
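A genetic algorithm for this kind of sequencing problem can be sketched in miniature. The fragment below is an illustration only; Deere's actual software is far more elaborate, and the fitness function, operators, and parameters here are simplifying assumptions. It evolves a build order that minimizes configuration changeovers between adjacent jobs:

```python
import random

def fitness(sequence, configs):
    """Lower is better: number of changeovers in the build order."""
    return sum(1 for a, b in zip(sequence, sequence[1:])
               if configs[a] != configs[b])

def order_crossover(p1, p2, rng):
    """Keep a slice of parent 1; fill the remaining jobs in parent 2's order."""
    i, j = sorted(rng.sample(range(len(p1)), 2))
    kept = set(p1[i:j])
    filler = [x for x in p2 if x not in kept]
    return filler[:i] + p1[i:j] + filler[i:]

def mutate(seq, rng, rate=0.1):
    """Occasionally swap two jobs to keep the population diverse."""
    seq = list(seq)
    if rng.random() < rate:
        a, b = rng.sample(range(len(seq)), 2)
        seq[a], seq[b] = seq[b], seq[a]
    return seq

def evolve(configs, generations=200, pop_size=30, seed=0):
    """Elitist GA: keep the best half, breed the rest from survivors."""
    rng = random.Random(seed)
    n = len(configs)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, configs))
        survivors = pop[: pop_size // 2]
        children = [mutate(order_crossover(rng.choice(survivors),
                                           rng.choice(survivors), rng), rng)
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda s: fitness(s, configs))
```

Selection and recombination do the scheduling; no one writes down an explicit rule for grouping like configurations together, yet grouped orders win out.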


Successes such as these have drawn companies to the Santa Fe Institute by the dozens in recent years, each looking for ways to profitably apply complexity theory to its business. And these pilgrims are not confining their quests to operational applications. Some are hoping complexity theory will provide a new understanding of the development of social structures, including business organizations. Coopers & Lybrand, McKinsey & Co., and Ernst & Young have all sent people to SFI to learn more about its research, and, they hope, to find ways to roll complex adaptive systems theory into their consulting practices.



The idea that vastly complex organisms – be they cells, galaxies, economies or business organizations – can arise from a few agents interacting according to simple rules certainly does hold the promise of what Wheatley terms “a simpler way of being in this world.” Who needs 10 million lines of computer code outlining to the nth detail every step of a manufacturing or distribution chain when three or four simple rules suffice? Who needs a complicated plan for reorganizing the company when the company can organize itself? Who needs a vice president of strategic planning? And for that matter, do we really need a CEO?

But does complexity theory really lend itself to organizing human agents in a company? Michael McMaster, who works with a U.K.-based consulting firm called Knowledge Based Development Ltd., believes the answer is yes. McMaster has used complexity principles to design the work of a cross-functional team of pipefitters and welders building an off-shore oil platform – though design is probably not the right word. McMaster distilled project tasks down to just four basic rules and then set the workers, or agents, free to create their own work processes.

But while this demonstration of self-organizing theory applied to human agents may seem an important evolutionary step for complexity theorists, it doesn’t exactly break new ground in terms of describing how work gets done. The fact that groups of workers will, if given the chance, find ways to accomplish a task is hardly a revelation. This has been the driving force behind the creation of self-directed teams for years.

Even the term self-organization is not unprecedented. Six years ago, researchers into workplace learning began to explore the ways in which workers organize themselves into “communities of practice” to accomplish jobs, and how these communities self-organize in ways that are often invisible to supervisors and managers (see “Communities of Practice,” TRAINING, February).

In a new field such as complexity theory, however, every modest success at applying the idea to the workplace quickly inflames the imagination with the hope of bigger things to come. Echoing the prognostication of young Werner Heisenberg 75 years before them, the disciples of self-organization proclaim they are on the path of scientific discoveries that will shed light on the whole range of human intelligence. “Approaches based on the principles of complex adaptive systems theory will completely change the way we organize, compete, think of industries and do business,” declares McMaster.

But precisely how complexity theory will shape the work world is not clear. If you set the right number of agents in motion, each agent following the right set of three or four simple rules, complexity theory predicts that these agents will eventually, like boids, organize into something large, complex and unexpected. But what, exactly? The high-performance, global business organization of the 21st century? Or a wonderfully complex mechanism for achieving bankruptcy and ruin?

And in what sort of environment will these agents be set loose to do their self-organizing? Must they be unencumbered by any trace of hierarchy, or management structure, including a CEO and board of directors? Or will there still need to be someone, somewhere, calling the shots – at least some of the time?

These are legitimate questions for which budding complexity theorists have no answers yet, admits Christopher Meyer, who heads up Ernst & Young’s Center for Business Innovation in Boston. But Meyer believes an application of complexity theory to organizational design will one day be found. Last year he mailed 15,000 copies of At Home in the Universe to Ernst & Young clients and drew dozens of curious companies to a three-day symposium called “Embracing Complexity.”

Meyer acknowledges that a lot of ground remains to be covered before complexity theory will be palatable to the traditionalist in the business world. “There’s no solid theory yet to explain how aggregations of agents in something like a business organization will interact to create the emergent properties of a new organization,” he says. Computer simulations such as boids, while tantalizing, do not prompt one to rush straight out and demolish one’s reporting structure. “Today’s simulations don’t have what you’d call a hierarchy of objectives,” he says. “Teams operating under sets of three or four rules may do their tasks well, but what do you need to make those tasks come together in a larger sense?”


Meyer sees hope, however, and perhaps even a three-point road map for bringing complexity theory to fruition as a management tool. The first point, which has already been reached, he says, is to apply complexity theory to operational problems. “Complexity-based models at General Motors and John Deere have proven that they solve operational problems better than linear techniques,” says Meyer.

The next phase, which researchers are starting to close in on, is to develop simulations that have what Meyer calls a “feel of real life to them.” The final step will be reached when these real-life simulations are so realistic that they can be used to solve organizational problems.

That third step, Meyer acknowledges, is still years in the future. But the second step, the real-life simulation, is being taken today, he believes. One such simulation can be found in a computer-generated world called Sugarscape.

In their book Growing Artificial Societies (MIT Press, 1996), researchers Joshua Epstein and Robert Axtell describe how they use agent-based computer modeling to create a landscape – essentially a large, two-dimensional grid – called Sugarscape. Each square on the grid is assigned a certain amount of sugar and the ability to replenish its sugar at varying rates. Some squares replenish quickly, others more slowly.

Set in motion in this Sugarscape are the computerized actors in the drama – agents that like to eat sugar. These agents are “born” into the landscape with the ability to see, a metabolism, and a set of genetic attributes. They are set in motion by a simple set of rules: “Look around as far as your vision permits, find the spot with the most sugar, go there and eat the sugar.” The agents metabolize sugar as they move from place to place. If their movements don’t turn up enough sugar to sustain their metabolic rate, they die. Sugarscape is a cruel world.
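Those rules translate almost directly into code. The sketch below is a stripped-down, hypothetical version of one Sugarscape tick; the field names and the constant growback rule are simplifications of Epstein and Axtell's model, invented for this example:

```python
def sugarscape_step(grid, capacity, agents, growback=1):
    """One tick of a minimal Sugarscape on a wrapping square lattice.
    Each agent looks along the four lattice directions as far as its
    vision allows, moves to the richest visible square, harvests it,
    and pays its metabolism. Agents that run out of sugar die."""
    size = len(grid)
    survivors = []
    for agent in agents:
        x, y = agent["pos"]
        best = (grid[x][y], (x, y))
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            for dist in range(1, agent["vision"] + 1):
                nx, ny = (x + dx * dist) % size, (y + dy * dist) % size
                if grid[nx][ny] > best[0]:
                    best = (grid[nx][ny], (nx, ny))
        x, y = best[1]
        agent["pos"] = (x, y)
        agent["sugar"] += grid[x][y]           # harvest the square
        grid[x][y] = 0
        agent["sugar"] -= agent["metabolism"]  # pay the metabolic cost
        if agent["sugar"] > 0:
            survivors.append(agent)            # Sugarscape is a cruel world
    # The landscape grows sugar back toward each square's capacity.
    for i in range(size):
        for j in range(size):
            grid[i][j] = min(capacity[i][j], grid[i][j] + growback)
    return survivors
```

Run over many ticks with varied vision and metabolism, even this crude version produces the unequal wealth distributions and migration waves the authors report in richer form.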

Now, if this simulation doesn’t exactly convey a “real-life” feel for what occurs in your workplace, it may be because the description here has been monstrously simplified. Or then again, it may be because Sugarscape is a crude first step at using complexity theory to model human behavior. Nonetheless, the authors argue that the actions of their Sugarscape agents reveal uncanny parallels to such human activities as “trade, migration, group formation, combat, interaction with an environment, transmission of culture, propagation of disease, and population dynamics.”

Interestingly enough, the folks at SFI who have pioneered the hard-science research into complexity theory make no promises as to the relevance their new ideas will find in the corporate world. “We understand the attractiveness of using complexity theory as an organizational model,” says Bruce Abel, SFI’s vice president of research. “Complex adaptive systems teach us that there is no stability, things are constantly changing. To businesses being buffeted by market changes and technological changes, it seems natural that there should be some applicability.”

But it’s not an easy connection to make, confesses Abel, especially once you start looking at large organizations and the myriad sets of relationships within them. “It’s a lot harder to study a city than it is to study an anthill,” he says.

The extent to which business leaders embrace complexity theory as a self-organizing model may ultimately depend on whether they come to perceive it as a true model, or simply an interesting way to think about the world.

To embrace complexity as a model means that business leaders also must be prepared to embrace the scenario described in A Simpler Way: employees turned loose to heap mess upon mess until a new workable system emerges. Moreover, according to the full-cloth version of self-organization, there’s absolutely no way to predict in any way, shape or form what will ultimately emerge, or how long the emergence will take. Remember, complexity theory derives from chaos theory, which gives us the famously cliched analogy to describe the unknowability of the world’s infinite interdependencies: A butterfly flaps its wings in Brazil; a stock market in Tokyo crumbles.

Barring the development of some sort of model that packages the theoretical power of chaos theory into a palatable form, we are left with self-organization as what? A metaphor? One more expression of a basic idea that has been recycling through the management vocabulary for years under various names: Theory Y, quality circles, self-managing teams, empowerment?

But if self-organization ends up as nothing more than a metaphor, how is it different from teams or empowerment? What does it bring that’s new?

“It’s a deeper way of thinking about these things,” says consultant Peter Block of West Mystic, CT. “We’ve been operating on a metaphor of engineering and control, a metaphor of economic scarcity. Here’s a new idea that proposes an abundance of possibilities instead of scarcity, that asks people to imagine all these new possibilities if they just let go of their controlling behavior. I think that’s good.”

One final speculative question about self-organizing systems: Suppose self-organizing doesn’t turn out to be just the latest craze on the seminar circuit? Suppose we do find a way to leverage these new ideas into something transformational? What, then, would the emergent self-organizing entities look like?

“There are two good examples, one obvious, one maybe less so,” says Thomas Malone, a professor at MIT’s Sloan School of Management and co-director of a project there called the Initiative for Inventing the Organizations of the 21st Century.

The obvious analogy, says Malone, is the Internet, a highly decentralized set of agreements on ways to communicate with no overarching control. “It’s interesting to contemplate whether the Internet would have grown as fast or as large as it has if it had been run in a more centralized way by someone like AT&T,” says Malone. “I don’t think so.”

The Internet is often cited as an example of what a self-organizing system might look like. But in the eyes of enthusiasts, it’s more than a model; it’s also a means of getting there. In order to reap the advantages of decentralized decision-making that self-organization promises, says Malone, you have to give everyone access to all information. “Information technology is what makes self-organizing possible.”

But this analogy ignores the fact that most companies are looking for something a little less chaotic than the Internet when they set up their own corporate intranets. “Most companies find ways to overlay a set of rules, some sort of road map for how information gets shared,” observes Carla O’Dell, president of the American Productivity and Quality Center in Houston, which is preparing to release a report on the role that information technology plays in corporate knowledge management strategies. “You can’t share all the information,” says O’Dell. “Without some organizing filter, e-mail systems are brought to their knees; workers are paralyzed by information overload.”

Not that O’Dell means to dismiss the idea of self-organization. “I think it’s the most powerful and interesting idea to come along in the last 20 years,” she says. “But it’s a difficult idea to grasp right now. How do you balance the freedom of local self-organizing with the need for global perspective and some degree of control?”