NASA develops PPA system to improve the safety and accuracy of civil and research aircraft

The PPA system will help keep the C-20A Gulfstream III flying level so the UAVSAR radar pod can scan geoseismic hot spots. (Source: US Army)


The Wide Area Augmentation System (WAAS) is only now entering civil aviation navigation in the United States. WAAS provides a GPS-based means for aircraft to maintain a flight path by issuing level correction vectors. The end result is that the plane flies a prescribed level path -- either one recorded from a previous flight or a computer-generated path -- and follows it to an accuracy of 30 feet.

Not one to rest on its laurels, NASA is keeping the ball rolling by developing an even better system, dubbed the Platform Precision Autopilot (PPA). One significant limitation of WAAS is that, because it relies on GPS satellites and traditional correction techniques, it can only operate within 75 degrees of latitude in the northern and southern hemispheres. PPA, which NASA plans to use in research planes that travel over Greenland and the Arctic, also uses GPS, but it extends coverage by relaying real-time GPS correction vectors over Iridium's satellite phone network, allowing navigation anywhere on the globe.

PPA also makes significant gains in accuracy over WAAS. WAAS's 30-foot accuracy has been improved to 15 feet with PPA, a two-fold improvement. NASA hopes to become even more accurate, and is shooting for an accuracy of a few millimeters.

The final step after grabbing the more accurate GPS data is to combine it with 40 Hz input data from the aircraft's laser gyro-driven Inertial Navigation Unit (INU). By combining these signals, the aircraft's onboard computer produces position and guidance information. This information drives the autopilot, but it is also displayed in traditional instrument landing system (ILS) form, so pilots can read and understand it and take corrective action in case of system malfunction or failure. By presenting guidance in ILS form, the system can become FAA-certified, paving the way for its eventual adoption on commercial aircraft.
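
NASA has not published the PPA's actual control laws, but a minimal complementary-filter sketch (in Python, with made-up weights and an assumed 1 Hz GPS rate) gives a feel for how fast inertial data can be blended with slower, absolute GPS corrections:

```python
# Illustrative complementary filter: blend fast inertial data with slow,
# absolute GPS fixes. Rates and weights are assumptions, not PPA parameters.

INU_RATE_HZ = 40        # inertial navigation unit update rate (from the article)
ALPHA = 0.98            # assumed weight given to the inertial estimate

def fuse_altitude(inu_samples, gps_fixes):
    """Blend INU vertical-velocity samples (40 Hz) with GPS altitude fixes.

    inu_samples: list of vertical velocities in ft/s, one per 1/40 s tick
    gps_fixes:   dict mapping tick index -> GPS altitude in ft
    Returns a list of fused altitude estimates, one per tick.
    """
    dt = 1.0 / INU_RATE_HZ
    altitude = gps_fixes.get(0, 0.0)   # start from the first GPS fix
    estimates = []
    for tick, v_z in enumerate(inu_samples):
        # Dead-reckon with the fast inertial data...
        altitude += v_z * dt
        # ...and pull the estimate toward GPS whenever a fix arrives.
        if tick in gps_fixes:
            altitude = ALPHA * altitude + (1.0 - ALPHA) * gps_fixes[tick]
        estimates.append(altitude)
    return estimates
```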

The system was developed at NASA's Dryden Flight Research Center in Edwards, CA, in conjunction with NASA's Jet Propulsion Laboratory (JPL) in Pasadena, CA. It is designed to be used with NASA's Unmanned Aerial Vehicle Synthetic Aperture Radar (UAVSAR), a radar system designed at JPL under the guidance of NASA engineer Scott Hensley. The UAVSAR broadcasts microwaves in the 1.2 GHz range from an L-band antenna.
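
As a quick sanity check on the L-band label, the wavelength follows directly from the quoted frequency:

```python
# Wavelength of a 1.2 GHz signal: lambda = c / f
c = 299_792_458          # speed of light in m/s
f = 1.2e9                # UAVSAR operating frequency in Hz (from the article)
wavelength = c / f
print(f"{wavelength * 100:.1f} cm")   # ~25 cm, squarely in the L-band (15-30 cm)
```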

NASA intends to use the UAVSAR for precision mapping of terrain, particularly with unmanned vehicles, to map and monitor sites of extreme geologic activity. The UAVSAR is very flexible and can electronically adjust its signal, allowing it to be mounted on a wide variety of vehicles, but it requires a system like PPA to maintain a steady enough altitude for it to get good images.

The UAVSAR will be mounted aboard NASA's C-20A Gulfstream III, which will be used to test PPA's accuracy and whether it performs well enough for the UAVSAR system's readings. NASA plans to log 140 hours of test flights before August 2008. Since the Gulfstream III operates outside civilian airspace, it will not need the permit required to operate a UAV, which takes 90 days to obtain due to a somewhat archaic processing system. The test platform will allow NASA to quickly map hot zones of geologic activity. Satellite SAR systems already exist, but they only pass over a given location every 24 to 45 days, so being in the right place at the right time for short-term events is unlikely.

NASA continues to lead the way in international aviation, and its PPA and UAVSAR systems are no exception. The PPA is especially promising: it not only enables cutting-edge research flights, but also promises to make America's next generation of civilian aircraft safer.


The National Institute of Standards and Technology has set the requirements for its Cryptographic Hash Algorithm Competition

The National Institute of Standards and Technology (NIST) recently announced a competition to create a new hash algorithm. Hashes are algorithms that condense blocks of data into a short fingerprint for use in message authentication, digital signatures, and other security applications.
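
For a concrete sense of what such a fingerprint looks like, Python's standard hashlib module can compute the current SHA-2 digests; note how a tiny change to the message produces a completely different hash:

```python
import hashlib

message = b"Pay Alice $100"
tampered = b"Pay Alice $900"

# A hash condenses an arbitrary message into a short, fixed-length fingerprint.
print(hashlib.sha256(message).hexdigest())
# Even a one-character change produces a completely different fingerprint,
# which is what makes hashes useful for signatures and authentication.
print(hashlib.sha256(tampered).hexdigest())
```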

The competition comes as advances in cryptanalysis make the current SHA-1 and SHA-2 family standards more vulnerable. NIST plans to have the new hash algorithm, which will be known as Secure Hash Algorithm-3 (SHA-3), augment the standards presently specified in Federal Information Processing Standard (FIPS) 180-2, the Secure Hash Standard. Federal civilian computers are required to use these standards, and many in the private sector adopt them as well.

SHA-1 in particular has been seriously attacked in recent years. Due to the success of those attacks, NIST changed its policy for federal agencies, recommending in March 2006 that they cease use of SHA-1 as soon as possible and move to SHA-2. The move to SHA-2 is required of all agencies as of 2010, with some exceptions for lesser uses such as message authentication codes, key derivation functions and random number generators.

NIST's goal is to provide greater security and efficiency for applications that use cryptographic hash algorithms. A tentative timeline (PDF) was presented at the Second Cryptographic Hash Workshop, held in August 2006. The timeline was later adjusted to account for practical considerations such as scheduling around other workshops to minimize travel for interested parties, as well as the FIPS 180-2 reviews scheduled for 2007 and 2012.

A draft set of acceptability requirements and submission and evaluation criteria was published in January 2007 and, after a three-month open comment period, was revised. The final requirements for the competition (PDF) were published in the Federal Register on November 2, 2007.

The overall process for the SHA-3 competition is similar to that of the earlier Advanced Encryption Standard (AES) competition held by NIST.

FIPS 180-2 specifies five cryptographic hash algorithms: SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512. Having superseded FIPS 180-1 in August 2002, FIPS 180-2 is already five years old, and with advances in cryptanalysis and computing power, it is no surprise that those algorithms have come under heavy attack.
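
The suffix of each name (apart from SHA-1, which produces a 160-bit digest) is the digest length in bits, which is easy to confirm with Python's hashlib:

```python
import hashlib

# Digest lengths of the five FIPS 180-2 algorithms. Except for SHA-1,
# the number in each name is the digest length in bits.
for name in ("sha1", "sha224", "sha256", "sha384", "sha512"):
    digest_bits = hashlib.new(name).digest_size * 8
    print(f"{name.upper():>7}: {digest_bits}-bit digest")
```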

Source: DailyTech

Tuesday, November 27, 2007

IBM Lands Atop Supercomputer List


IBM leads the TOP500 for the fourth consecutive year

IBM is once again number one for the 30th installment of the TOP500 supercomputer list. The newly upgraded BlueGene/L System -- a joint venture between IBM and the Department of Energy's Lawrence Livermore National Laboratory -- took top honors with a Linpack benchmark score of 478.2 TeraFLOPS.

The BlueGene/L has held the top spot since November 2004 and shows no signs of relinquishing its crown any time soon. The 478.2 TFLOPS performance of BlueGene/L marked a significant improvement over its pre-upgrade performance six months ago of “only” 280.6 TFLOPS.

The BlueGene/L, which took top honors, features 104 racks (up from 64 pre-upgrade). Each rack contains 1,024 nodes with 2,048 IBM Power processors.
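
Taking those figures at face value, a bit of back-of-the-envelope arithmetic (an illustration, not an official IBM breakdown) shows the scale of the upgraded machine:

```python
# Rough scale of the upgraded BlueGene/L, using the figures quoted above.
racks = 104
processors_per_rack = 2_048
linpack_tflops = 478.2

total_processors = racks * processors_per_rack          # 212,992 processors
gflops_per_processor = linpack_tflops * 1_000 / total_processors
print(f"{total_processors:,} processors, ~{gflops_per_processor:.1f} GFLOPS each on Linpack")
```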

The next closest supercomputer, a BlueGene/P housed at Germany's Forschungszentrum Juelich, was a distant second with 167.3 TFLOPS. Rounding out the top five were an SGI Altix ICE 8200 based at the New Mexico Computing Applications Center (126.9 TFLOPS), an HP Cluster Platform 3000 BL460c run by India's Computational Research Laboratories (117.9 TFLOPS), and another HP Cluster Platform 3000 BL460c run by the Swedish government (102.8 TFLOPS).

IBM led all competitors on the TOP500 list with 232 systems (45 percent representation). Four of the IBM systems were in the Top 10, while 38 were in the Top 100.

Of the 500 supercomputer systems that made the TOP500, 354 used Intel processors. AMD processors pulled in with a second place showing of 78 systems, while IBM's Power processors were featured in 61 systems.

The BlueGene/L's significant gains in computing power over a short period of time led many (including IBM) to speculate that the petaflop barrier will be crossed in 2008. “Petaflop computers promise exponential breakthroughs in science and engineering by providing predictive and highly detailed simulations,” said IBM in a press release. “Earthquake simulations, for example, could show building-by-building movements of entire regions along the San Andreas fault, improving future designs of earthquake-resistant structures.”

The massive amounts of computing power provided by these supercomputer systems are channeled by a diverse field of institutions and companies.

"This sort of looms over what they are doing," said Herb Schultz, marketing manager for IBM's Deep Computing division. "You have financial institutions running these huge financial modeling applications and you have oil and gas companies looking for resources in obscure places."

IBM expects that its new supercomputer systems, to be introduced next year, will find a home in a variety of fields including weather forecasting, energy exploration, and auto and aerospace engineering.

IBM is also putting the finishing touches on its “Roadrunner” supercomputer, which will feature Power processor cores backed by Sony's Cell Broadband Engine, a chip that has gained an enormous amount of fame in the past year. The petaflop machine will be delivered to the U.S. Department of Energy's Los Alamos National Laboratory in the latter half of 2008.

Source: DailyTech

Thursday, November 1, 2007

GMail 2.0 Coming Soon


GMail 2.0 is coming soon to your friendly local internet.

Google announced some news at its Analysts Day that may excite some: it will be releasing a completely rewritten, optimized and improved version of GMail. The new version's goal is reportedly to raise customer satisfaction to 70 percent.

Such a high satisfaction rate, particularly for a free email service, would be very impressive. Many internet services, including internet service providers themselves, suffer from woefully low satisfaction and general customer antipathy.

GMail has been driven by aging JavaScript, which will now be brought up to speed. The new version, dubbed "GMail 2.0", has two main goals: faster service and better contact management.

Initial testers reported that the test version felt noticeably faster and more responsive, particularly in contacts management. They report a new contacts screen, and note that chat can no longer be hidden (at least in the trial version).

Another improvement is that contact pictures can be transferred directly from Google's Picasa web albums, all server-side, to reduce bandwidth and processing expenses on the user's side.

Google recently made headlines when it switched to the superior IMAP protocol, an unexpected move for a free internet service, as reported at DailyTech. IMAP support should be almost completely rolled out to GMail users by now.

Expect to see GMail 2.0 rolled out sometime this year or early next year. The update is recognizable by a "newer version" option appearing in the upper right hand links in the mail window. Be sure to comment if you received this update, as Google has not announced a hard date for the rollout.

Google currently features over 4 GB of storage per account, which makes it one of the most generous providers in terms of storage space.

Google has been breaking ground with many new initiatives, including Google News, the Google Lunar Challenge and the Unity Project -- a trans-Atlantic cable line. It did recently get its heart broken by Facebook, which rejected it in favor of Microsoft. However, Google is unconcerned, as it has a deal with Myspace.com and several other networks, and hordes of loyal GMail users who will soon be enjoying an improved version of their favorite email service.



New sockets, chipsets and architecture en route from Intel before 2009

Nehalem will likely be the most aggressive processor architecture in Intel's portfolio since the original Pentium. With the launch of the Core architecture, the company announced its tick-tock strategy: design new architecture, then shrink the process node. Rinse and repeat.

Tick-tock is alive and well as Intel's corporate roadmap reveals additional details about its desktop iteration of 45nm quad-core Nehalem, dubbed Bloomfield.

Nehalem will be fundamentally different from the Core architecture for at least two reasons. The company will move the memory controller from the core logic on the motherboard onto the processor die. This tactic has been a cornerstone of the AMD K8 architecture since 2003.

In addition, Nehalem will also feature a new bus interconnect, currently dubbed QuickPath Interconnect. The new interconnect behaves much like HyperTransport, currently used on all AMD platforms since K8.

A new bus and memory controller means a new socket design. Existing motherboards are not compatible with Nehalem-based processors. The new desktop socket, labeled LGA1366, will completely replace the existing LGA775 interconnect.

The company will replace the X38 and the yet-to-be-announced X48 desktop chipsets with the Tylersburg chipset family and ICH10 southbridge for these first LGA1366 motherboards.

Corporate guidance also suggests the company will likely ditch all DDR2 support in favor of DDR3, at least on high-end platforms. All Bloomfield processors will support three DDR3 memory channels.
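
Triple-channel DDR3 roughly triples theoretical peak memory bandwidth over a single channel. As an illustration only -- the DDR3-1333 speed grade below is an assumption, since supported memory speeds were not part of this guidance:

```python
# Theoretical peak bandwidth of a triple-channel DDR3 memory controller.
# The DDR3-1333 speed grade is assumed for illustration only.
channels = 3
transfers_per_second = 1_333e6   # DDR3-1333: 1,333 MT/s per channel (assumed)
bytes_per_transfer = 8           # 64-bit channel width

peak_gb_s = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"~{peak_gb_s:.0f} GB/s theoretical peak")   # ~32 GB/s
```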

However, not everything is known about Nehalem just yet. Corporate guidance suggests Bloomfield will feature a new revision of Hyper-Threading. Although each Bloomfield features four physical cores, the processor will dynamically schedule additional threads -- operating systems will detect eight logical cores on a Bloomfield machine.
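
To software, that looks like twice as many cores as physically exist. As a quick illustration (using the third-party psutil package, an assumption here rather than anything the roadmap mentions), the distinction is easy to see:

```python
# Physical vs. logical core counts as the operating system reports them.
# psutil is a third-party package (pip install psutil), used here only to
# illustrate the distinction Hyper-Threading creates.
import psutil

physical = psutil.cpu_count(logical=False)   # e.g. 4 on a quad-core Bloomfield
logical = psutil.cpu_count(logical=True)     # e.g. 8 with Hyper-Threading enabled
print(f"{physical} physical cores, {logical} logical cores")
```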

Bloomfield will feature less cache than Intel's high-end 45nm Penryn offerings slated for release between now and Q4 2008. However, unlike the 12MB L2 cache featured on Penryn, the 8MB L3 cache on all Nehalem offerings can be shared between all four on-die cores.

Intel's highest-end Bloomfield processors will feature a 130W thermal envelope. Extreme Edition Penryn processors, the first on the 45nm node, have a thermal envelope that tops out around 136W. Intel's Q9550 processor (2.8 GHz, 45nm quad-core) sports a 95W TDP.

Paul Otellini, Intel's CEO, boldly announced that Nehalem had "taped out" at the Intel Developer Forum last September. A tape-out marks the point at which a design team moves from design work to working silicon samples.

At both Intel and AMD, tape-out comes approximately one year before the actual launch date. True to tick-tock, Bloomfield's debut will also come roughly one year after the launch of the 45nm node, Penryn.
