Thursday, February 21, 2019

Human Factors in Aviation Essay

A large number of flight mishaps occur largely due to the lack of a clear view of the surrounding environment. Traditional vision systems rely on synthetic vision, or specifically a view of the immediate environment free of mist, fog, and other abnormalities. Real scenarios require the capability to provide reliable vision that overcomes natural hindrances. Humans learned the art of flight when they abandoned the idea of flapping wings; similarly, the latest enhanced vision systems have sidestepped traditional vision systems to ensure flight safety.

In recent years, Controlled Flight into Terrain (CFIT) has posed a significant risk in both civilian and military aviation. One of aviation's worst accidents occurred in Tenerife, when two Boeing 747s collided as one aircraft was attempting to take off while the other was to land. The risk of CFIT can be greatly reduced with the aid of a suite of radar and collision-avoidance equipment commonly termed Enhanced Vision Systems (EVS).

Rationale

One of the primary causes of many runway accidents is reduced visibility. One solution to this limitation lies in the use of infrared sensing in aviation applications. All objects on earth emit infrared radiation, and their emissions and features can be detected through total darkness as well as through intervening mist, rain, haze, smoke, and other scenarios in which the objects are invisible to the human eye (Kerr, 2004). The first EVS was targeted for production in 2001 as standard equipment on Gulfstream GV-SP aircraft.
The system was developed in part by Kollsman Inc. under an engineering license from Advanced Technologies, Inc. Utilization of EVS addressed critical areas such as CFIT avoidance, general safety enhancements during approach, landing, and takeoff, improved detection of trees, power lines, and other obstacles, improved visibility in brownout conditions, improved visibility in haze and rain, identification of rugged and sloping terrain, and detection of runway incursions.

Enhanced Vision Systems

An enhanced vision system is an electronic means of displaying the forward external scene topography through the use of infrared imaging sensors. These systems are a combination of near-term and long-term designs. Near-term designs deliver sensor imagery with superimposed flight symbology on a Head-Up Display (HUD) and may include such enhancements as runway outlines and other display augmentations such as obstacles, taxiways, and flight corridors. Long-term designs include complete replacement of the out-the-window scene with a combination of electro-optical and sensor information.

Infrared Sensors

EVS uses infrared (IR) sensors that detect and measure the levels of infrared radiation emitted continuously by all objects. An object's radiation level is a function of its temperature, with warmer objects emitting more radiation. The infrared sensor measures these emission levels, which are then processed to produce a thermal image of the sensor's forward field of view. EVS IR sensors operate in the infrared spectrum (Kerr, 2004). The different bands are long-wave IR, medium-wave IR, and short-wave IR. Two variants of this technology are currently in aircraft use. A single-sensor unit operating in the long-wave, maximum weather-penetration band has significant long-range capability. Short-wave sensors have the ability to enhance the acquisition of runway lighting.
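The temperature dependence described above can be illustrated with the Stefan-Boltzmann law, which is why warmer objects stand out to an IR sensor. This is a minimal sketch using idealized blackbody physics; the temperatures and emissivity values are illustrative, not drawn from any particular EVS sensor specification.

```python
# Sketch: why warmer objects look brighter to an IR sensor.
# Total emitted power follows the Stefan-Boltzmann law: j = sigma * T^4
# (idealized blackbody; real surfaces scale by an emissivity < 1).

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(temp_kelvin, emissivity=1.0):
    """Power radiated per square metre of surface (W/m^2)."""
    return emissivity * SIGMA * temp_kelvin ** 4

# A warm runway surface (~27 C) versus cooler surrounding terrain (~-3 C):
# the small temperature gap still yields a clear contrast in emitted power.
print(radiated_power(300.0), radiated_power(270.0))
```

Because emission goes as the fourth power of temperature, even modest temperature differences between objects and background produce usable thermal contrast in total darkness.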
A dual-sensor variant composed of short- and long-wave bands, used for both light and weather penetration, fuses both sensor images into a full-spectrum view. Image sensors operating in the long-wave infrared spectrum are cryo-cooled.

Models of EVS

One of the commonly used EVS models is the EVS 2000. The operation of the EVS 2000 dual image sensor is given in Figure 1. The long-wave infrared sensor provides the best weather penetration, ambient background, and terrain features. Similarly, the short-wave sensor provides the best detection of lighting, runway outlines, and obstacle lights. The signal processor combines the images of both sensors to display a fused image depicting the current environment (Kerr, Luk, Hammerstrom, and Misha, 2003). (Source: Kerr et al, 2003)

Boeing Enhanced Vision System

Boeing's EVS enhances situational awareness by providing electronic, real-time vision to the pilots. It provides information during low-level, night-time, and moderate-to-heavy-weather operations in all phases of flight. It has a series of imaging sensors, a navigational terrain database with a virtual pathway for approach during landings, an EVS image processor, and a wide-field-of-view, see-through helmet-mounted display integrated with a head tracker. It also includes a synthetic vision system accompanying the EVS to present a computer-generated image of the out-the-window view in areas that are not covered by the EVS imaging sensors. The EVS image processor performs the following three functions. It compares the image scanned by the ground-mapping radar and the MMW sensor with a database to present a computer-generated image of the ground terrain conditions. It is accompanied by a Global Positioning System (GPS) to provide a status map during all phases of flight. The IR imaging sensors provide a thermal image of the forward field of view of the aircraft.
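The dual-band fusion step described above can be sketched as a per-pixel blend of the two sensor images. This toy version (the weighting scheme and `swir_gain` parameter are assumptions for illustration, not the EVS 2000's actual fusion algorithm) shows the idea: terrain context comes from the LWIR band while strong SWIR returns, such as runway lights, dominate the fused pixel.

```python
# Minimal dual-band fusion sketch (illustrative only): each output pixel
# blends the LWIR band (weather/terrain background) with the SWIR band
# (runway and obstacle lights), clamped to 8-bit range.

def fuse(lwir, swir, swir_gain=0.6):
    """Blend two equal-sized 2-D images (lists of rows of 0-255 values)."""
    fused = []
    for row_l, row_s in zip(lwir, swir):
        fused.append([min(255, int((1 - swir_gain) * l + swir_gain * s))
                      for l, s in zip(row_l, row_s)])
    return fused

lwir = [[80, 90], [85, 95]]    # dim terrain background
swir = [[10, 255], [5, 250]]   # bright runway light in the right column
print(fuse(lwir, swir))
```

A real system would also register the two images first, since the cameras differ in field of view and resolution; that step is discussed in the DSP section below.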
Typical HUD symbology, including altitude, airspeed, pressure, etc., is added without any obscuration of the underlying scene. The SV imagery provides a three-dimensional view of a clear out-the-window scene with reference to the stored on-board database. Figure 2 gives Boeing's EVS/SV integrated system. The projection of SV data should be confirmed by the EVS data so that the images register accurately. The system provides for three basic views: the flight view (the normal view), the map views at different altitudes or ranges, and the orbit view, an exocentric/own-ship view from any orbiting location around the vehicle (Jennings, Alter, Barrow, Bernier and Guell, 2003). (Source: Jennings et al, 2003)

EVS Image Processing and Integration

Association Engine Approach

This is a neural-net-inspired, self-organizing associative memory approach that can be implemented on FPGA-based boards of moderate cost. It constitutes a very efficient implementation of best-match association at high real-time video rates, and it is highly robust in the face of noisy and obscured image inputs. This means of image representation emulates the human visual pathway. A preprocessor performs feature extraction of edges, as well as potentially higher levels of abstraction, in order to generate a large, sparse, random binary vector for each image frame. The features are created by looking for zero crossings after filtering with a Laplacian-of-Gaussian filter, thereby finding edges. Each edge image is then thresholded by taking the K strongest features, setting those to 1 and all others to 0. For multiple images, the feature vectors are strung together to create a composite vector. The operations are performed over a range of multi-resolution hyper-pixels, including those for 3-D images.
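The filter-then-threshold step above can be sketched in a 1-D toy version (the real system works on 2-D frames with a full Laplacian-of-Gaussian; a plain second-difference Laplacian stands in here for simplicity): filter the signal, then keep only the K strongest responses as 1s in a sparse binary vector.

```python
# Toy version of the association-engine feature step: Laplacian filtering
# followed by top-K binarization into a sparse binary feature vector.

def laplacian(signal):
    # Second-difference approximation of the Laplacian; edges show up
    # as paired positive/negative responses around an intensity step.
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

def top_k_binary(values, k):
    # Keep the k strongest responses (by magnitude) as 1s, zero the rest.
    ranked = sorted(range(len(values)), key=lambda i: abs(values[i]),
                    reverse=True)
    keep = set(ranked[:k])
    return [1 if i in keep else 0 for i in range(len(values))]

edge_signal = [0, 0, 0, 10, 10, 10, 0, 0]   # a step edge
responses = laplacian(edge_signal)
print(top_k_binary(responses, k=2))
```

The resulting vector is mostly zeros with exactly K ones, which is what makes the downstream best-match association cheap to implement in FPGA hardware.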
FPGA provides a complete solution by offering the necessary memory bandwidth, significant parallelism, and tolerance of low precision. Figure 3 provides an illustration of an association engine operation (Kerr et al, 2003). Fig 3: Association Engine Operation (Source: Kerr et al, 2003)

DSP Approach

One approach to performing multi-sensor image enhancement and fusion is the Retinex algorithm, developed at the NASA Langley Research Center. Digital signal processors from Texas Instruments have been used to successfully implement a real-time version of Retinex. The C6711, C6713, and DM642 are some of the commercial digital signal processors (DSPs) used for image processing. Image processing, a subset of digital signal processing, enables the fusion of images from various sensors to aid in efficient navigation. Figure 4: EVS Image Processing (Source: Hines et al, 2005)

In the EVS image processing architecture, long-wave infrared (LWIR) and short-wave infrared (SWIR) processing can be done simultaneously. The multi-spectral data streams are registered to remove field-of-view and spatial-resolution differences between the cameras and to correct inaccuracies. Registration of the long-wave IR data to the short-wave IR is performed by selecting the SWIR as the baseline and applying an affine transform to the LWIR imagery. The LaRC-patented Retinex algorithm is used to enhance the information content of the captured imagery, particularly during poor visibility conditions. The Retinex can also be used as a fusion engine, since the algorithm applies the same processing to the multi-spectral data, with multiple scaling operations on each spectral band. The fused video stream contains more information than the individual spectral bands and provides the pilot a single image which can be interpreted easily. Figure 4 illustrates the various processing stages in fusing a multi-spectral image (Hines et al, 2005).
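The core Retinex idea can be sketched in a 1-D toy version (the LaRC implementation is 2-D and multi-scale; the window radius here is an illustrative assumption): the output is the log of the signal minus the log of its local average, which flattens slow illumination changes while preserving local contrast.

```python
import math

# Toy single-scale Retinex: log(signal) - log(local surround average).
# Scene structure survives; overall brightness level cancels out.

def local_mean(signal, radius=1):
    """Sliding-window average, clipped at the signal boundaries."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def retinex(signal):
    surround = local_mean(signal)
    return [math.log(v) - math.log(s) for v, s in zip(signal, surround)]

# A dim scene and a 10x brighter scene with the same relative structure
# produce the same Retinex output: the illumination level cancels.
dim = [10, 12, 10, 12]
bright = [100, 120, 100, 120]
print(retinex(dim))
```

This illumination invariance is what makes Retinex attractive for poor-visibility imagery, where absolute brightness varies wildly but local structure is what the pilot needs.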
Design Tradeoffs

An LWIR-based single-image system is no panacea for fog, but it reduces hardware requirements. It is also a low-cost solution with lower resolution. An image fusion system provides active penetration of fog and better resolution, but comes at a higher cost. Increasing the bandwidth provides better size and angular resolution and satisfactory atmospheric transmission, but costs more. Basic diffraction physics limits the true angular resolution, but this can be overcome by providing sufficient over-sampling. Sensitivity versus update rate and physical size versus resolution have traditionally been issues with passive cameras. Fortunately, dual-mode sensors overcome these trade-offs (Kerr et al, 2003). A successful image capture of a landing scenario is given in Figure 5. Figure 5: EVS view vs. pilot's view (Source: Yerex, 2006)

Human Factors

Controlling the aircraft during the entire period of flight is the sole function of the pilot. The pilot seeks guidance from the co-pilot, the control tower, and the inbuilt EVS to successfully lead the aircraft. The pilot controls the aircraft based on a representation of the world displayed in the cockpit by the inbuilt systems and may not see the actual out-the-window visual scene. Visual information is presented that may not otherwise be visible. Some of the information may be lost due to limitations of resolution, field of view, or spectral sensitivities. Therefore, with EVS, the world is not viewed directly but as a representation through sensors and computerized databases. More importantly, the essential data for piloting should be available on the display. Though an EVS gives a representation of the exact view of the flight environment, its accuracy plays a significant role in flight safety. Thus human factors are vital for flight control.
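The diffraction limit mentioned in the tradeoffs above can be made concrete with the Rayleigh criterion for a circular aperture. The wavelengths and aperture size below are illustrative assumptions, not parameters of any particular EVS sensor; they show why the long-wave band pays a resolution penalty for its weather penetration.

```python
# Back-of-envelope diffraction limit: the smallest resolvable angle for
# a circular aperture is roughly theta = 1.22 * wavelength / diameter
# (Rayleigh criterion). Longer wavelengths resolve less angular detail
# for the same aperture, which is one of the LWIR-vs-SWIR tradeoffs.

def rayleigh_limit_rad(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution in radians."""
    return 1.22 * wavelength_m / aperture_m

lwir = rayleigh_limit_rad(10e-6, 0.10)   # ~10 um LWIR, 10 cm aperture
swir = rayleigh_limit_rad(1.5e-6, 0.10)  # ~1.5 um SWIR, same aperture
print(lwir, swir)
```

Under these assumed numbers the LWIR band's resolvable angle is several times coarser than the SWIR band's, which is why over-sampling and dual-band fusion are used to claw back effective resolution.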
