Sunday, April 17, 2016

Self-Assembling Nanotubes

       Nikola Tesla conjured up all sorts of interesting experiments for his famed “Tesla Coils.” Today, however, their main use has been relegated largely to impressing visitors at science museums.
That is about to change. Researchers at Rice University have used Tesla coils to get carbon nanotubes to self-assemble into long chains, a phenomenon the scientists have dubbed “Teslaphoresis.” Controlled, bottom-up assembly of nanomaterials could be useful in applications ranging from regenerative medicine, where the nanotubes would act as artificial nerves, to fabricating electronic circuits without ever touching them.
You can see a pretty impressive video of this so-called Teslaphoresis in action below.


In research described in the journal ACS Nano, the researchers were able to get the nanotubes to self-assemble into long chains because the Tesla coil generates an electric field that causes the positive and negative charges in each nanotube to oscillate. The ability to remotely affect the charges in each nanotube over such a distance is one of the most surprising outcomes of this research.
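In textbook terms, that oscillating-charge picture is dielectrophoresis: the field polarizes each nanotube, and the field gradient then pulls the polarized tube along. As a rough guide only (this is the standard expression for a small spherical particle, not the model used in the paper), the time-averaged dielectrophoretic force is

    \langle F_{\mathrm{DEP}} \rangle = 2\pi\,\varepsilon_m r^3\,\mathrm{Re}[K(\omega)]\,\nabla |E_{\mathrm{rms}}|^2,
    \qquad K(\omega) = \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\,\varepsilon_m^*}

where r is the particle radius, the starred epsilons are the complex permittivities of the particle and the surrounding medium, and K is the Clausius-Mossotti factor. Elongated particles such as nanotubes are far more polarizable along their length than a sphere of the same volume, so they feel a correspondingly larger force.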
“Electric fields have been used to move small objects, but only over ultra-short distances,” said Paul Cherukuri, who led the research, in a press release. “With Teslaphoresis, we have the ability to massively scale up force fields to move matter remotely.” The influence of the Tesla coils on the nanotubes doesn’t just get them to self-assemble; it can also power the circuits that the nanotubes form. In one experiment, which you can see demonstrated in the video, the researchers were able to get the nanotubes to form into wires that created a circuit between two LEDs, which were powered by the Tesla coil.
The Rice researchers initially used carbon nanotubes in their experiments because of their abundance at the institution, where the so-called HiPco process for mass-producing them was first developed. However, the researchers contend that the Teslaphoresis process could work with a variety of nanomaterials.
        No matter the material used, the key element is the Tesla coil. The researchers envision using much stronger coils that are able to generate far more powerful directed force fields. They are also considering using several Tesla coils in unison to create far more complex self-assembling circuits than the ones they have already produced.
        Cherukuri added: “There are so many applications where one could utilize strong force fields to control the behavior of matter in both biological and artificial systems. And even more exciting is how much fundamental physics and chemistry we are discovering as we move along. This really is just the first act in an amazing story.”


Saturday, April 16, 2016

2016's Top Tech Cars: Chevrolet Volt

Price: US $33,995 ($26,495 after $7,500 federal tax credit)
Power plant: 1.8-L four-cylinder with dual electric propulsion motors; total 111 kW (149 hp)
Overall fuel economy: Equivalent of 2.2 L/100 km (106 mpge) on electricity; 5.6 L/100 km (42 mpg) on gasoline

       Today’s US $2-a-gallon gasoline doesn’t bode well for the plug-in Volt’s showroom sales in America. But when gas prices spike again, the Volt hybrid may end up looking like one of the smartest fuel savers on the planet—and for one-third the price of a Tesla Model S.
        This second-generation Volt is a green sweetheart to drive, and improved in nearly every way over the original: sleeker, lighter, faster, quieter, and more efficient.
        Let’s cut to the chase. In a test drive north of San Francisco, the Volt managed precisely 60 all-electric miles (97 kilometers) before the gasoline engine kicked in. That beat the official 53-mile estimate, itself a 40 percent improvement over the first-generation Volt. At that point, the Volt’s new direct-injection engine smoothly blended combustion with electric power to offer 675 km of total range—no range anxiety in this car. As it crossed the 106-mile mark on this trip, it burned exactly one gallon of gasoline—3.8 liters—along with $1.50 worth of wall electricity.
In other words, spend $3.50 to cover 106 miles. Go ahead and try to do that in a Mini Cooper. There’s not a gasoline, diesel, or conventional hybrid on the road that can match that efficiency, which equated to 2.0 L/100 km (120 mpge) over the electric portion and better than 70 mpg overall. As advertised, owners who commute fewer than 53 miles round-trip can punch into work every day without ever using a drop of gasoline. Purely coincidentally, the Volt’s battery supplies a half-gallon’s worth of gasoline energy. So if you manage 53 miles (85 km) on a charge—which we exceeded without even glancing at the helpful gauges that coach you toward efficient driving—that works out exactly to the official U.S. estimate of 106 mpge.
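For readers who want to check that arithmetic, here is a quick back-of-the-envelope sketch in Python. It uses the EPA’s 33.7 kWh-per-gallon energy equivalence and the figures cited in this review (including the 18.4-kWh pack described below); the official 106-mpge rating is measured from wall-to-wheels energy over a test cycle, so this is a sanity check, not the EPA’s method.

    # Rough sanity check of the Volt's 106-mpge figure (illustrative only; the EPA
    # rating is measured from wall-to-wheels energy over a standard test cycle).
    KWH_PER_GALLON = 33.7        # EPA equivalence: 1 gallon of gasoline ~ 33.7 kWh

    battery_kwh = 18.4           # pack capacity cited in this review
    electric_range_miles = 53    # official all-electric range

    gallons_equiv = battery_kwh / KWH_PER_GALLON
    print(f"Pack energy: about {gallons_equiv:.2f} gallon-equivalents")

    # The shortcut used above: call the pack "half a gallon" of gasoline energy.
    print(f"53 mi / 0.5 gal-eq = {electric_range_miles / 0.5:.0f} mpge")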
And you won’t sacrifice performance. The new Volt’s curb weight drops by more than 90 kilograms (200 pounds), to about 1,607 kg, and the propulsion unit alone is 45 kg lighter. The new four-cylinder engine is more powerful, runs on regular rather than premium fuel, and operates at lower rpm to reduce the drone that plagued the original.
The dual electric motors are markedly revised, sharing no common parts with the first-gen Volt. Total horsepower remains at 149, but it now has a very hefty 401 newton meters (296 foot-pounds) of torque—21 more than before. The upshot is an 8.4-second surge to 60 mph (97 km/h). Perhaps more important, it gets to 30 mph in just 2.6 seconds, 0.7 second quicker than before—and remarkably, just two-tenths of a second slower than the 220-horsepower Volkswagen GTI that served as my “rabbit” during the driving test. One of the Volt’s coolest new features is a steering-wheel paddle that triggers regenerative braking. Grab the paddle as you’re entering turns or rolling up to stoplights and it feels like downshifting in a sports car, even as you’re saving energy.
Battery mass drops by 10 kg, and there are 192 lithium-ion cells in the Volt’s T-shaped, under-floor battery pack, down from 288 before. Yet thanks to a tweak in battery chemistry, capacity is up 8 percent, to 18.4 kilowatt-hours—the secret to the Volt’s newly extended electric range.
And at $26,495 after a $7,500 federal tax giveaway (er, credit) in the United States, the Volt is a bargain. The Chevy is not only two-fifths the cost of a Tesla and one-fifth the cost of a BMW i8, it’s actually less than the $33,500 price of the average new car in America—and $1,200 less than its lower-performing predecessor.
        The Volt isn’t the only Chevy that will test American appetites for electrified cars: Late this year, the Chevrolet Bolt goes into production, a pure EV hatchback that promises 200 all-electric miles, for an estimated price of around $30,000 after federal and local tax breaks.
And what will they call the Bolt’s electric successor? Well, it will have to be “Jolt,” because “Colt” has already been taken.


Friday, April 15, 2016

2016's Top Tech Cars: Ferrari 488 Spider

Price: US $275,000
Power plant: 3.9-L V-8 with dual turbochargers; 493 kW (661 hp)
Overall fuel economy: 13.8 L/100 km (17 mpg)

        Hustling Ferrari’s latest fantasy through Italy’s Emilia-Romagna region, we take all of 3 seconds to salute the end of an era and to hold on tight as another begins. That’s how much time it takes the glorious 488 Spider to reach 100 kilometers per hour (62 miles per hour). Closing a book on seven decades of howling, superhigh-revving, naturally aspirated engines, every new Ferrari will now be turbocharged, a hybrid, or both. And while Ferrari’s new turbos can’t hit the operatic, 9,000-rpm tenor notes of its predecessors (not yet, anyway), it’s game over in every other regard.
        Although the midmounted V-8 displaces 0.6 liters less than the departed 458 model, it pumps out a shocking 20 percent more power. And the 760 newton meters (561 foot-pounds) of torque surpasses the old 458 by fully 40 percent. The result shatters the record for power per liter for any production-car V-8 even as fuel consumption drops by 14 percent.
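A couple of lines of Python make those comparisons concrete. The inputs come straight from the specs above and the percentages quoted in the text, so treat the derived numbers as ballpark values rather than Ferrari’s official data.

    # Specific output and old-model torque, derived from the figures quoted above.
    power_kw, displacement_l = 493.0, 3.9
    hp_per_kw = 1.341

    print(f"{power_kw / displacement_l:.0f} kW/L "
          f"({power_kw * hp_per_kw / displacement_l:.0f} hp/L)")   # specific output

    new_torque_nm = 760.0
    old_torque_nm = new_torque_nm / 1.40   # "surpasses the old 458 by fully 40 percent"
    print(f"458 torque was roughly {old_torque_nm:.0f} Nm")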
That unbeatable one-two punch of power and efficiency is why such stalwarts as Ford, Mercedes, and Porsche have gotten the memo: Join the turbo revolution, or die. In 8.7 seconds, or less time than a Toyota Prius takes to reach 100 km/h, the Ferrari is doing 200 km/h (124 mph).
        With warp-speed acceleration achieved, Ferrari looked to tackle another, more elusive supercar goal: making the Ferrari easy to drive. The Side Slip Control 2 system assesses a driver’s skill level in real time, applying its Formula One–bred stability and traction systems to maximize speed in any situation. Twitch a finger on the carbon-fiber steering wheel and the Ferrari reacts in 0.06 second. Flick the column-mounted paddles and the seven-speed F1 automated gearbox downshifts 40 percent faster than before and upshifts 30 percent more rapidly. The style is bellissimo, naturally, but the beauty springs from pure aero function. The signature is the new “blown spoiler,” a discreet cove atop the rear deck that funnels air to pin the Ferrari to the pavement, with no need for a rear spoiler that would add aero drag and spoil the lines.
"After slicing up the countryside like a haunch of prosciutto, we find final affirmation on a looping ascent to Forte di San Leo, a soaring promontory and medieval fortress. The sash-wearing mayor and townspeople pour out to greet our Ferraris, snapping enough photos to fill a family album." Yes, driving a 488 Spider in Italy is almost like cruising in the Popemobile. But while the pope can’t make you infallible, this car just might.


Sunday, April 10, 2016

Feasible Wave Energy. Finally.

        With all that to-and-fro and ebb-and-flow, the motions of the ocean offer an endless supply of renewable energy. Or they would, if engineers could figure out how to capture that power. Most prototypes for wave-energy converters have been massive and costly, or else they’re torn apart during violent storms at sea. But a Swedish company called CorPower Ocean may finally have a solution. Tests show its new buoy can produce three times as much electricity as the best rival tech, with a far more practical design.
        For starters, it’s relatively small: Whereas other devices can stretch hundreds of feet across and weigh well over a thousand tons, CorPower’s bobbing red machine is a mere 26 feet in diameter. Yet a single buoy stationed offshore can generate about 250 kilowatts of power—enough to cover the electricity needs of 200 homes. Wave-power farms could contain hundreds or even thousands of them.
        The process relies on three key components: a mooring line that holds the buoy in place and keeps it upright and stable; a device called the WaveSpring that causes the buoy to oscillate in time with incoming waves; and a gear mechanism that converts that bobbing motion into electricity with maximum efficiency. The company’s engineers have tested the wave-energy converters in tanks, and field trials are scheduled for next year.
        “CorPower could be the real winner in wave energy,” says Antonio Sarmento, an independent researcher in the field. “Their technology represents a breakthrough.” More like a sea change.



Saturday, April 9, 2016

2016's Top Tech Cars: Mercedes-Benz F 015 Concept

Price: Not for sale
Power plant: Hydrogen fuel cell
Overall fuel economy: N/A

        For years, automakers have rhapsodized about how our cars would become mobile offices and living spaces. And then they botched even the simple stuff, like letting you dial up the Backstreet Boys from your iPod on the car’s sound system. But fear not. The Mercedes-Benz F 015 will be the rolling roost of your dreams. You’ll just have to wait at least until 2030, when Mercedes thinks this kind of hydrogen-powered, fully autonomous vehicle will become viable.
        Bigger than an S-Class, the Benz concept looks like a Clockwork Orange hipster lounge, with its walnut-veneered floor and wall-wrapping touch and gesture displays. Mercedes sees it as a retreat that will maximize privacy or productivity in the hectic urban zones of the future. The mod theme continues with two front white-leather-clad lounge seats that swivel rearward (after all, the “driver” usually won’t need to attend to the road). The steering wheel telescopes into the dash during autonomous mode. And any passenger can take charge of vehicle functions, such as speed and which of the 360-degree views to project inside the car.
The concept car makes clever use of optical technologies to communicate with cars and pedestrians. A forward-looking laser, for example, can beam messages onto the pavement, including a whimsical image of a zebra crossing or the words “please go ahead.” You get into the car using a smartphone app, which opens enormous clamshell-style portals for easy access to the lounge space inside. A hydrogen fuel-cell plug-in hybrid drive system could deliver a 1,100-kilometer driving range, Mercedes says, including 200 km on battery power.
If you care to trust Mercedes’s crystal ball, by 2030 hydrogen cars will be a common sight. Unless gasoline still costs $2 a gallon. Or if Tesla has its way.


Friday, April 8, 2016

IBM's Rodent Brain Chip

IBM has created neuromorphic chips. In August of last year, project leader Dharmendra Modha and his cognitive computing team shared their unusual creations with the outside world, running a three-week “boot camp” for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. At a conference last year, this eclectic group of computer scientists explored the particulars of IBM’s architecture and began building software for the chip, dubbed TrueNorth.
In an interview with WIRED last year, Modha showed a reporter one of the machines built from the chips: “About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently.” “You’re looking at a small rodent,” he said. He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.
        Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they’re using the chip to run “deep learning” algorithms, the same algorithms that drive the internet’s latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft’s Skype. But the promise is that IBM’s chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.
        “What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who’s exploring how deep learning could be applied to national security. “It lets us tackle new problems in new environments.”
The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google and Facebook and Microsoft are now running their algorithms on machines backed with GPUs (chips originally built to render computer graphics), and they’re moving towards FPGAs (chips you can program for particular tasks). For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and the University of Zurich, TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power.
The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM’s chips do, recreating the neurons and synapses in the brain. One maps well onto the other. “The chip gives you a highly efficient way of executing neural networks,” says Mars, who declined an invitation to this month’s boot camp but has closely followed the progress of the chip.
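To give a flavor of the “neurons” such a chip emulates, here is a minimal leaky integrate-and-fire simulation in Python. It is a generic textbook model, not TrueNorth’s actual neuron circuit: the membrane potential leaks toward rest, integrates its input, and emits a discrete spike whenever it crosses a threshold.

    # A generic leaky integrate-and-fire neuron -- not TrueNorth's actual neuron
    # model, just an illustration of the spiking style of computation it uses.
    def simulate_lif(input_current, dt=1e-3, tau=0.02,
                     v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Return spike times (in seconds) for a neuron driven by a list of input values."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            # Membrane potential leaks toward rest and integrates the input.
            v += dt * (-(v - v_rest) / tau + i_in)
            if v >= v_thresh:              # threshold crossed: emit a spike, reset
                spikes.append(step * dt)
                v = v_reset
        return spikes

    # Drive the neuron with a constant input for one second of simulated time.
    current = [60.0] * 1000
    print(simulate_lif(current))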
        That said, the TrueNorth suits only part of the deep learning process—at least as the chip exists today—and some question how big an impact it will have. Though IBM is now sharing the chips with outside researchers, it’s years away from the market. For Modha, however, this is as it should be. As he puts it: “We’re trying to lay the foundation for significant change.”


Sunday, April 3, 2016

2016's Top Tech Cars: Ford GT

Price: US $400,000
Power plant: 3.5-L V-6 with dual turbochargers; 485 kW (650 hp)
Overall fuel economy: N/A

Nearly a half century ago, the Ford GT40 went to the 24 Hours of Le Mans and crushed mighty Ferrari, sparking an enduring legend. Now Ford looks for a Le Mans déjà vu this summer with a reborn racing GT, followed by 250 annual copies of a roughly US $400,000 scissor-doored wonder car for the street.
The GT eschews a V-8 for a downsized twin-turbo V-6 based on its Daytona-winning LMP2 race engine. Ford is promising the best power-to-weight ratio of any production car in the world, with a hand-laid carbon-fiber tub and body for this mid-engine monster.
An active suspension lets the Ford hunker down at triple-digit speeds to reduce drag, while an active air brake at the rear rises and angles as needed to boost aero downforce or slow the car into corners. The gorgeous fuselage is billionaire bait, but the bravura style is wedded to pure function. A curved pair of flying buttresses performs dual tricks: the winged roof channels direct air to the rear spoiler, and the buttresses’ hollow sections contain piping for the turbo intercoolers. Engine intake air is hoovered from beneath the car, compressed by the turbos, then snaked through the winglets and down again to hyperventilate the V-6. Heated air from the intercoolers flows rearward and exits through tubes in the center of the rear taillights.
It’s all executed so beautifully that we were pleased to gawp at the thing at the recent Detroit Auto Show. But I’ll be happier when Ford finally lets us drive it.


Saturday, April 2, 2016

2016's Top Tech Cars: Audi Autonomous RS7

Price: US $136,650
Power train: 96-kW (129-hp) AC electric motor with 1.5-L 170-kW (228-hp) three-cylinder gasoline engine
Overall fuel economy: 8.4 L/100 km (28 mpg) on gasoline; electric equivalent of 3.1 L/100 km (76 mpge)

       Audi’s autonomous cars are becoming quite the world travelers: Recall the much-ballyhooed first robotic drive from San Francisco to New York City, about a year ago. Impressive stuff, though honestly, humans can hold their own at pulling into a rest stop.

Here in Spain, the man-vs.-machine competition will be at higher speed and for higher stakes. I’m about to take on Robby, the autonomous RS7 sport sedan that’s designed to rock a racetrack at speeds that would blister Google’s cartoonish bubble car. If a human driver can’t keep up, it occurs to me, then our obsolescence draws that much closer. It’s only a matter of time before governments and automakers pry the ignition keys out of our fallible, accident-prone hands for good.

Robby is looking cool and confident in the pits at Parcmotor Castelloli, near Barcelona. And for good reason: The Audi weighs 400 kilograms (882 pounds) less than Bobby, the RS7 that holds the world speed record for autonomous cars, at 240 kilometers per hour (149 miles per hour).

I take to calling the newer car Robby the Robot, after the glass-skulled automaton from the 1956 sci-fi movie Forbidden Planet. Miklos Kiss, Audi’s head of predevelopment for driver assistance systems, introduces us to his two autonomous brainchildren. Popping Bobby’s hatch, we find it full to bursting with computer gear. Robby’s, in contrast, has plenty of room left over for luggage. There’s a single MicroAutoBox brain and power supply and two other small computers.

Incredibly, Audi’s latest differential GPS unit can fix Robby’s position to within 2 centimeters, vastly better than today’s GPS standard of roughly 1.5 meters. This price-no-object system also uses redundant cameras to triangulate and thus to confirm the car’s location. Keeping a fully autonomous vehicle safely within lanes will require zeroing in to 50 cm (around 20 inches).

Controllers adjust the engine, electric steering, transmission, and brakes, with redundant fail-safes: There’s a spare power supply and brake controllers if the first ones conk out. A 4.0-liter biturbo V-8 produces a villainous exhaust note that echoes off the dusty canyon walls.

I slide into Robby’s shotgun seat with some trepidation, thinking that the driver’s side will remain spookily unoccupied. But surprise! There’s an Audi engineer in the seat, along for the passive ride but holding a plunger connected to a cord. If something goes wrong and he lets go of the plunger’s button, the Audi will slow and halt on the track. Theoretically.

The checkered flag waves. The RS7 launches itself down the front straight and charges into the first corner, the steering wheel twirling, the ghosts fully in charge of this machine. The brake and throttle pedals aren’t moving at all because the computer commands are bypassing the old analog connections.

Before I arrived, Audi engineers had manually driven Robby around this Spanish track to measure its barriers and “geo-fence” a safe zone beyond which Robby will not go. Like a real-life slot car, the Audi locks onto its programmed track line, its path varying by only a few centimeters. Yet the RS7 also reacts in real time to conditions such as a slippery track or wearing tires, dialing back power or correcting the steering if it begins to slide off its satellite-guided path.
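The “geo-fence” idea is easy to sketch. Below is a toy point-in-polygon check in Python, my own illustration rather than anything from Audi’s software stack: the mapped safe zone becomes a polygon of track coordinates, and the car’s position is tested against it on every control cycle.

    # Toy geo-fence check, in the spirit of the safety zone mapped around the track.
    def inside_geofence(point, fence):
        """Return True if (x, y) lies inside the polygon given as a list of (x, y) vertices."""
        x, y = point
        inside = False
        n = len(fence)
        for i in range(n):
            x1, y1 = fence[i]
            x2, y2 = fence[(i + 1) % n]
            # Count how many polygon edges a horizontal ray from the point crosses.
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    # Hypothetical rectangular fence around a stretch of track, coordinates in meters.
    fence = [(0, 0), (100, 0), (100, 40), (0, 40)]
    print(inside_geofence((50, 20), fence))   # True  -> keep lapping
    print(inside_geofence((120, 20), fence))  # False -> slow the car and stop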

Photo: Audi
Brain in a Boot: The Audi RS7’s computing hardware fills the trunk space.
Robby turns out to be a smoothy, a race-instructor type who puts up great, flowing lap times yet keeps the car utterly balanced and composed. Its dead-consistent laps vary by less than 0.3 second, including a best this day of 2:09.2.

Now, it’s my turn, and I jump aboard a production Audi RS7. Suddenly, I’m John Connor taking on Skynet, a human fighting for his increasingly pitiful and superfluous species. After one reconnaissance lap on this unfamiliar track, I turn the Audi loose. Please, Lord, don’t let me lose to a stinking machine.

I drive back into the pits and dash over to the timer. A few Audi engineers applaud, a bit grudgingly, I’m thinking. But my lap of 2:05.4 is nearly 4 seconds quicker than Robby’s. Even after hundreds of laps in recent weeks, Robby’s best is still 2 seconds behind my first trip around Castelloli. Take that, you remorseless Terminator, German accent and all.

But as my adrenaline subsides, I’m forced to concede that adrenaline is among my biological advantages. Robby may know speed, but the words “race” or “win” simply aren’t part of his vocabulary. Yet.

Where I punished the tires and pushed the limits, Robby stayed emotionless, programmed to run safe, endlessly repeatable laps. The last thing Audi needs is for a self-driving car to disintegrate against a wall or injure its passengers, setting back autonomous driving by a few years. A few tweaks of the algorithms, a few more generations, and Robby’s offspring will be chip-enabled Michael Schumachers. (Please, Audi, name your next car Ricky Bobby.) They’ll scan and pick out us humans up ahead and make us eat their digital dust, if they choose. Or they’ll chauffeur our miserable hides straight to the police if we act up, as did Tom Cruise’s 2054 Lexus in Minority Report. What’s to stop them?

Yes, the rise of the machines seems inevitable. But as my race with Robby showed, some humans will still put up a fight before going to the scrap heap.


Friday, April 1, 2016

The Tesla Model 3

       The $35,000 Tesla Model 3 is finally here. It is sleek, quick as hell, and meant for the masses. And it is the most important car the company will ever build.
The Model 3 is the car Tesla Motors has promised since the company’s founding, the car that CEO Elon Musk is convinced will push EVs into the mainstream and the technology to an inflection point. Not to put too fine a point on it, it is the car Musk believes will change the world.

“It’s very important to accelerate the transition to sustainable transport,” Musk said on stage. “This is really important for the future of the world.”

In person and on paper, the Model 3 is a stunner. It’s a handsome sedan, with four doors and five seats, and all the comfort and practicality you’d expect of an upscale mid-size sedan. It promises a 0-to-60-mph time under six seconds and a range of 215 miles. It’s packed with tech, stylish, and a bargain if Tesla can deliver it at the $27,500 base price Musk promises you’ll pay after the federal tax credit.

The specs and price are key, because so far Tesla Motors has aimed squarely at the affluent. The company’s first three models—the innovative Roadster sports car, exquisite Model S sedan, and tech-slinging Model X SUV—made electric cars fun, cool, and compelling. The Model 3 is meant to do something greater: sell the masses on electric propulsion.
Tesla is hardly alone in hoping to do this and, frankly, got beaten in the race to build a $30K EV with a triple-digit range by General Motors. In January, the Detroit stalwart introduced the 2017 Chevrolet Bolt, a battery electric hatchback with a range of 200 miles and a price of 30 grand after the $7,500 federal tax credit.
Still, Musk isn’t the slightest bit worried and, to be fair, has little reason to be. The Bolt is lovely, but Tesla has a proven ability to get people excited, and there’s no denying the company has a cachet many automakers do not. You don’t often see people lining up outside dealerships simply to place a $1,000 deposit on a car they haven’t even seen—something that happened at many Tesla stores this week. By the time Musk pulled the sheet off the Model 3 at the sprawling SpaceX campus here in Hawthorne, California, 115,000 customers had put their money down.

“They’ll absolutely have a wow factor, because it’s Tesla,” says Gary Silberg, an automotive analyst with KPMG. “They’ll know how to market it, and from that perspective, there’s no doubt in my mind it’s gonna be a big success.”

Tesla doesn’t have to worry about creating a market for the 3. Nor does it have to worry about actually building it. No, the upstart automaker has to do something much harder.
If the company is to truly influence, let alone change, how humanity moves around, it must become more than a niche automaker building luxury vehicles and playing gadfly to the big players. That means producing vehicles on a massive scale and generating sustainable profits. To do that, Tesla must think and act a lot more like the very automakers Musk is so quick to ridicule as outdated and old-fashioned.

It’s time for Tesla to grow up.


Saturday, March 26, 2016

The super-efficient autonomous intersection

       As you know if you've driven anywhere ever, traffic lights are part of a vast conspiracy designed to make it as difficult and time consuming as possible for you to get where you want to go. Lights change from green to red because someone might be coming from another direction, which isn't a very efficient way to run things, since you spend so much of your travel time either slowing down, speeding up, or stopped uselessly.

The only reason that we have to suffer through red lights is that humans in general aren't aware enough, quick enough, or kind enough to safely and reliably take turns through intersections at speed. Autonomous cars, which are far better at both driving and cooperating, don't need these restrictions. So, with a little bit of communication and coordination, they'll be able to blast through even the most complex intersections while barely slowing down.

These autonomous intersections are "slot-based," which means that they operate similarly to the way that air traffic control systems at airports coordinate landing aircraft. Air traffic controllers communicate with all incoming aircraft, and assign each one of them a specific place in the landing pattern. The individual aircraft speed up or slow down on their approach to the pattern, such that they enter it at the right time, in the right slot, and the overall pattern flows steadily. This is important, since fixed-wing aircraft tend to have trouble coming to a stop before landing.

The reason that we can't implement this system in cars is twofold: we don't have a centralized intersection control system to coordinate between vehicles, and vehicles (driven by humans) don't communicate their intentions in a reliable manner. But with autonomous cars, we could make this happen for real, and if we do, the advantages would be significant. Using a centralized intersection management and vehicle communication system, slot-based intersections like the one in the video above could significantly boost intersection efficiency. We've known this anecdotally for a while, but a new paper from researchers at MIT gives our hunches about the increase in efficiency empirical heft. The MIT team also suggests ways in which traffic flow through intersections like these could be optimized.

Rather than designing traffic management systems so that they prioritize vehicle arrival times on a first come, first served basis, the researchers suggest sending vehicles through in batches—especially as traffic gets heavier. This would involve a slight delay for individual vehicles (since they may have to coordinate with other vehicles to form a batch), but it's more efficient overall, since batches of cars can trade intersection time better than single vehicles can. The video above shows the batch method, while the video below (from 2012 research at UT Austin) shows a highly complex intersection with coordination of single cars rather than batches.

Simulations suggest that a slot-based system sending through groups of cars could double the capacity of an intersection, while also significantly reducing wait times. In the simplest case (an intersection of two single-lane roads), cars arriving at a rate of one every 3 seconds would experience an average delay of about 5 seconds if they had to wait for a traffic light to turn green. An autonomously controlled intersection would drop that wait time to less than a second. Not bad. But the control system really starts to show its worth as traffic increases. If a car arrives every 2.5 seconds, the average car will be delayed about 10 seconds by a traffic light, whereas the slot-based intersection would hold it up for a second and a half. And as the arriving cars start to overload the capacity of our little intersection at 1 car every 2 seconds, you'd be stuck there for 99 seconds (!) if there's a light, but delayed only 2.5 seconds under autonomous slot-based control.
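The flavor of that comparison is easy to reproduce with a toy queueing model. The Python sketch below is my own simplification, not the MIT simulator: one single-lane approach, cars arriving at a fixed gap, and either a 60-second light with 30 seconds of green or a slot scheme that simply reserves the next free crossing time. The parameter values are arbitrary, so don’t expect the paper’s exact numbers, but the pattern is the same: the light’s average delay climbs as traffic thickens, while the slot-based approach barely notices.

    # Toy comparison of a fixed-cycle traffic light vs. slot-based intersection control.
    # A simplification for illustration, not the model from the MIT paper.

    def light_delay(arrivals, cycle=60.0, green=30.0, service=1.0):
        """Cars cross only during the green half of each cycle, one per `service` seconds."""
        delays, next_free = [], 0.0
        for t in arrivals:
            depart = max(t, next_free)
            if depart % cycle >= green:                  # landed in the red phase:
                depart = (depart // cycle + 1) * cycle   # wait for the next green
            delays.append(depart - t)
            next_free = depart + service
        return sum(delays) / len(delays)

    def slot_delay(arrivals, headway=1.0):
        """Every car is assigned the earliest free slot, spaced `headway` seconds apart."""
        delays, next_free = [], 0.0
        for t in arrivals:
            depart = max(t, next_free)
            delays.append(depart - t)
            next_free = depart + headway
        return sum(delays) / len(delays)

    for gap in (3.0, 2.5, 2.0):                          # one car every `gap` seconds
        arrivals = [i * gap for i in range(1000)]
        print(f"gap {gap}s: light ~{light_delay(arrivals):.1f}s, "
              f"slots ~{slot_delay(arrivals):.1f}s average delay")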

We should point out that this only works if all of the cars traveling through the intersection are autonomous. One human trying to get through this delicately choreographed scrum would probably cause an enormous number of accidents due to both lack of coordination and unpredictability. For this reason, it seems likely that traffic lights aren't going to disappear until humans finally give up driving. An interim step, though, might be traffic lights that stay green (in all directions) as long as only autonomous cars are passing through them, reverting to a traditional (frustrating and inefficient, that is) state of operation when a human approaches.

The researchers point out that the advantages of slot-based control go beyond just saving time: they reduce fuel consumption, and along with it, the amount of carbon that would otherwise get pumped into the atmosphere by legions of cars idling at traffic lights.

It will take a lot of work to implement something like this. And because it's heavily dependent on having autonomous cars replace today’s human-controlled vehicles, let's hurry up and let the robots take over so that we can all benefit.


Friday, March 25, 2016

Nanocones and what they mean for solar power

       Researchers at the Royal Melbourne Institute of Technology (RMIT University) in Australia have created an entirely new nanostructure they have dubbed a “nanocone”. It combines the upside-down physics of topological insulators with the easier-to-explain process of plasmonics. The result is a nanomaterial that can be used with silicon-based photovoltaics to increase their light absorption properties.

Topological insulators have the peculiar property of behaving as insulators on the inside but conductors on the outside, while plasmonics exploits the oscillations in electron density that are generated when photons hit a metal surface. By bringing these two worlds together, the RMIT researchers have created a plasmonic nanostructure whose core-shell geometry lends itself to being a topological insulator.

“This is the first time that a nanocone with intrinsically core-shell structure has been fabricated,” said Min Gu, the RMIT professor who led the research, in an e-mail interview with IEEE Spectrum. “The nanocone has a topologically protected metallic shell and a dielectric (insulating) core. They do not need a particular fabrication method and the unique nanostructure has the intrinsic properties of topological insulators.”

The topological insulator nanocone arrays could enhance the light absorption of solar cells by focusing incident sunlight into the silicon, according to Gu.

In research described in the journal Science Advances, this enhanced light absorption is achieved by the insulating core of the cone providing an ultrahigh refractive index in the near-infrared frequency range. Meanwhile, the metallic shell provides a strong plasmonic response and strong backward light scattering in the visible frequency range.

The researchers predict that when a nanocone array is integrated into a silicon thin-film solar cell, it can enhance the cell’s light absorption by up to 15 percent in the ultraviolet and visible ranges.

“With the enhanced light absorption, both the short circuit current and photoelectric conversion efficiency could be enhanced,” said Gu.

In future research, Gu and his colleagues plan to investigate plasmonics in other types of topological insulator nanostructures, such as nanospheres and nanocylinders, and to achieve plasmonic nanostructures that respond to a broad spectrum of light, from the ultraviolet down to the terahertz range, all in a single core-shell nanostructure. He added: “In particular, we want to apply these nanostructures into ultra-thin PV devices.”


Thursday, March 24, 2016

Warp Speed: Science Fiction and Reality

Faster-than-light (FTL) travel has appeared in countless sci-fi shows and movies; in reality, however, nobody has figured out how to do it. NASA has sketched plans for FTL-drive starships, but very few of them are within reach, because they are too expensive, they break the laws of physics, or we simply don’t have that level of technology yet.
So the verdict is that we either need to figure out how to break the laws of physics or build something that travels near the speed of light (NFTL). My idea for an NFTL system is like a slingshot: picture the Large Hadron Collider (LHC) in Geneva, Switzerland, only much bigger. The slingshot would accelerate a ship inside its main tube and then shoot it across the stars, very much like a softball pitch.

On the other side of this field, we could use a theoretical Boson Cloud Exciter, which I heard about on a TV show called Eureka. This BCE would theoretically be used as a catcher’s mitt for an FTL jump. Another idea would be to figure out how to convert an entire ship into photons or some other type of energy that can travel at or beyond the speed of light, shoot it at a target planet, and convert it back into the original matter of the ship and its contents on arrival. The risk is that if something blocks some of the particles along the way, the ship and all of its contents would be scattered throughout space.
NASA has also explored the idea of having a starship project a “warp bubble,” with spacetime expanding behind the ship to push it along and contracting in front of it to pull it forward. However, physics once again causes problems: sustaining such a bubble appears to require exotic matter with negative energy density, and you can’t simply project a field ahead of you at the speed of light without breaking special relativity.


This warp bubble concept is known as the Alcubierre warp drive, and it works like a “moving sidewalk.” As an example, imagine you are on one of those moving sidewalks found in some airports. Although there may be a limit to how fast you can walk across the floor (the light-speed limit), what if you are standing on a section of floor that moves faster than you can walk (a moving section of spacetime)? In the case of the Alcubierre warp drive, this moving section of spacetime is created by expanding spacetime behind the ship (coming out of the floor) and contracting spacetime in front of the ship (going back into the floor). The idea has roots in the Big Bang (the inflationary universe), in which spacetime itself expanded faster than the speed of light.
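For the mathematically inclined, the geometry behind that moving-sidewalk picture is usually written as the Alcubierre line element. This is the standard textbook form (in units where c = 1), not anything specific to NASA’s study:

    ds^2 = -dt^2 + \left[ dx - v_s(t)\, f(r_s)\, dt \right]^2 + dy^2 + dz^2

Here x_s(t) is the position of the bubble’s center, v_s(t) = dx_s/dt is its speed, r_s is the distance from that center, and f(r_s) is a smooth shape function equal to 1 inside the bubble and 0 far away. Spacetime stays flat both for the ship and for distant observers; all of the stretching and squeezing happens in the bubble’s wall.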
So, to attain NFTL or FTL travel, much more research, attention, and money will need to be pumped into this technology and related technologies.
For more information check out the NASA website: http://www.nasa.gov/centers/glenn/technology/warp/warpstat_prt.htm


Sunday, March 13, 2016

It's alright, the Chinese malware won't hurt your selfies

       The security track record of Apple’s locked-down mobile operating system has been so spotless that any hairline fracture in its protections makes headlines. So when security researchers revealed that a new flavor of malware known as AceDeceiver had found its way onto as many as 6.6 million Chinese iPhones, the news was covered like a kind of smartphone bird flu, originating in Asia but bound to infect the globe. But for iPhone owners, the lesson is an old one: Don’t go to extraordinary lengths to install sketchy pirated apps on your phone, and you should be fine.

“Everyone’s blown this way out of proportion,” says iOS security researcher and forensics expert Jonathan Zdziarski. “In its current form, this isn’t dangerous except to the exceptionally stupid.”

Researchers at Palo Alto Networks on Wednesday published a detailed blog post revealing that Chinese software has been using a set of clever techniques to bypass Apple’s security restrictions. The hack was pulled off by the developers of a Chinese-language desktop program for Windows called AiSiHelper, designed to interface with iPhones to let anyone jailbreak phones, back them up, and install pirated apps. When AiSiHelper is installed on a PC and an iPhone or iPad is connected to it, the desktop program automatically plants its own rogue third-party app store app on your iPhone or iPad, which then prompts you for your AppleID and password and sends any credentials you enter to a remote server. (Palo Alto Networks notes that it’s not clear if those credentials have yet been abused for fraud.)

To circumvent Apple’s installation restrictions, the AiSiHelper developers used two significant tricks: They snuck three versions of their app into the App Store by making them appear to Westerners as benign wallpaper apps while hiding their password-demanding features in the versions tailored to the Chinese market. And more importantly, they took advantage of a man-in-the-middle vulnerability in Apple’s FairPlay anti-piracy system that allowed them to keep installing their apps on iPhones from their desktop software even after the apps had been detected by Apple and removed from the App Store. Apple didn’t respond to WIRED’s request for comment on that FairPlay vulnerability or the company’s failure to catch the sketchy apps in its App Store code reviews.

According to Palo Alto Networks, AiSiHelper has 15 million downloads and 6.6 million active users, and its rogue app installation targets people in mainland China. It’s not the first time that unsavory developers have taken advantage of the popularity of pirated apps in China to spread nasty code: A piece of password-stealing malware infected 225,000 jailbroken iPhones last year. But AceDeceiver has spooked the security community by breaking Apple’s security restrictions even on non-jailbroken iPhones.

Security researchers are more concerned that AceDeceiver’s disturbingly clever techniques could be replicated to attack people who weren’t already seeking to install unauthorized apps on their phone. If hackers could quietly install a piece of malware on your desktop machine—as opposed to Chinese iPhone owners’ voluntary installation of AiSiHelper on their PCs—they might be able to pull off the same FairPlay man-in-the-middle trick to inject malicious apps onto your iPhone, too. “It’s likely we’ll see this start to affect more regions around the world, whether by these attackers or others who copy the attack technique,” wrote Palo Alto researcher Claud Xiao in the firm’s blog post.

Despite AceDeceiver’s innovations, however, even Palo Alto’s own researchers admit that it doesn’t pose much of a realistic threat to anyone who’s not actively seeking to put shady apps on their device. Instead, argues Palo Alto researcher Ryan Olson, it’s more likely that incautious people like those who installed AiSiHelper will again use the technique to install pirated, unauthorized programs that come with unwanted side effects. “We likely will see this attack used again in the future, but …it’s probably going to be in a similar model,” says Olson. “People installing software to pirate apps which abuses this loophole and may introduce malicious behavior, rather than widespread infections.”

As for the scenario where the same technique is repurposed by invisible desktop malware to smuggle an evil app onto the user’s iPhone, iOS security researcher Zdziarski argues it’s possible, but farfetched. The technique would first require sneaking that evil app past Apple’s app store security review. The victim’s desktop machine would have to be infected with malware. And even then the malicious app would be restricted to its own “sandbox” on the device and unable to access other apps’ processes or data. And if an attacker has access to a desktop, Zdziarski points out, why try to install a rogue app when he could just install ransomware or spyware directly on the PC, or even take iCloud tokens from the computer to steal the person’s iPhone’s secrets? “The technical capability is there, but I’m not sure how useful this is to an attacker,” Zdziarski says. “Why screw around installing an app that asks for their password when you already have full access to their data?”

In other words, it’s unlikely that AceDeceiver’s techniques would make an attacker’s job easier unless someone is actively seeking to circumvent Apple’s protections. The lesson for iPhone owners remains: If you don’t want rogue apps plaguing your pristine device, don’t go looking for them.


Saturday, March 12, 2016

An Actual Hoverboard

       In the fall of 2014, Arx Pax unveiled what was essentially the first real, working hoverboard. It used proprietary “Magnetic Field Architecture” which enabled its Hover Engines to float over a passive conductive surface (copper or aluminum, but copper works best). It’s the board you saw Tony Hawk ride in a metal half-pipe, and it lifted the likes of Buzz Aldrin and a sumo wrestler.

We All Float On
For starters, the new board, the Hendo 2.0, is more skateboard-ish than the original. It uses a traditional longboard deck, which is mounted to the battery-pack body. The four Hover Engines are spread out a little wider to add stability, and they are now attached to the main body via skateboard trucks. Those trucks tilt the Hover Engines and actually allow you to steer, accelerate, and brake.

Note that neither copper nor aluminum is “magnetic.” They are, however, conductive. The Hover Engines on the board create a magnetic field, and when that field interacts with a conductive surface it induces small closed loops of electricity called “eddy currents.” Those eddy currents create a secondary magnetic field within the conductive surface, and because the two fields are essentially mirror images of each other, they repel each other. Lift is generated—and motion, if it’s angled properly. Basically, the board creates little electromagnetic waves and you get to surf them. Which is awesome.
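In textbook terms (my gloss, not Arx Pax’s), that is just Faraday’s law of induction plus Lenz’s law. A changing magnetic flux through the conductive surface drives the eddy currents,

    \mathcal{E} = -\frac{d\Phi_B}{dt}

and the minus sign (Lenz’s law) says the induced currents generate a field that opposes the change, so the moving source magnets are pushed away. Keep the field changing quickly enough over a thick enough sheet of copper and the repulsion is strong enough to carry a rider.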
The press saw it working on a smaller scale with Arx Pax’s developer hardware, known as the Whitebox+. As you might guess, the Whitebox+ is, well, a white box. It’s 10 x 10 inches across the top and 5 inches deep. It uses the same technology as the larger board, but everything’s shrunk down so developers can test their tweaks without risking injury. It has four miniature Hover Engines on the bottom, and you can control it with a standard dual-joystick radio controller. As the engines tilt, the box scoots off in the direction they’re leaning. It feels much like flying a standard quadcopter.

The point was further driven home when I tried another developer device, the Pika. It’s just a single Whitebox-sized Hover Engine set into a 3D-printed housing so you can hold it in your hand. When you tilt it in different directions over the floor, you can really feel it push against you. It offered much better lateral thrust than the dumb electric leaf-blower stunt I tried.

It may look like the Hendo 2.0 doesn’t float as high as the Hendo 1.0 did, but that’s not exactly true. New on the Hendo 2.0, the Hover Engines each reside in their own little housings. This protects each engine from bumps and significantly reduces the noise level (the 1.0 had a deafening shriek). The housings extend below the pods a little bit, which makes it look like the board isn’t hovering as high.

But as cool as the hoverboard is (and it is), what Arx Pax is really selling here is the Hover Engine. It’s something the company thinks would be especially adept at transporting goods and humans. Specifically, it’s taking aim at traditional maglev systems, which often require a powered track. Since Arx Pax’s Magnetic Field Architecture system requires only a passive conductive surface, it may have lower power needs. The MFA system also has the advantage of being omnidirectional in its propulsion, so it could not only carry cars along a track, but could theoretically leave the track once it reached its destination terminal and then carry riders off to their specific destinations, self-driving-car style. Of course, that would only work over conductive surfaces, and U.S. roads currently aren’t equipped for the job.

The place we’re most likely to see this technology applied is in a system like the proposed Hyperloop, and it shouldn’t surprise you that Arx Pax is courting Hyperloop designers hard.

In January, its co-founder and CEO Greg Henderson sent out an open letter to the Hyperloop community extolling the virtues of MFA to participants in Texas A&M’s Hyperloop pod design competition. The company even built its own pod for the competition to demonstrate its unique capabilities. Arx Pax is reportedly in talks with most of the winners, so we may well see a number of the Hover Engines in action in this summer’s upcoming Hyperloop pod race in Hawthorne, CA. Arx Pax is selling its Hover Engine kits for $20,000 each. That’s a significant chunk of change, but Henderson says they’re still having to hustle to keep up with demand.

I assumed Henderson was pitching Magnetic Field Architecture merely as a means of levitation, in which case the pod would still need an additional method of propulsion to reach the high speeds (over 700 mph) that Elon Musk and others have quoted. But he thinks the MFA system would be enough to support both levitation and propulsion. “To date we have modeled speeds up to 500 mph with some very promising results...I predict that before the Hyperloop is built, we will have technology limited more by time and the human body’s tolerance of G-forces than in the speed of our propulsion systems.”

Other places we might see this technology? Surprisingly, Arx Pax is looking at some biotech applications. Lots of animals use magnetic fields to aid in navigation. One of those animals is the mosquito. The company is exploring the possibility that its technology could be a chemical-free way of mitigating the number of mosquitoes in a certain area.

As for the hoverboards? Well, ten lucky, well-heeled Kickstarter backers are each receiving their own Hendo 2.0, having spent over $10,000 each in the crowdfunding campaign. And of course Arx Pax will keep a few on hand for demonstrations, but these will continue to be very rare beasts. Maybe someday we’ll see lavish, all-copper skate parks emerge where kids can rent these boards and experience this incredible gliding sensation. For now, though, it’s a spoonful of sugar to get us talking about hover technology, which isn’t such a bitter pill to swallow anyway.



Friday, March 11, 2016

Bananas as game controllers

       What if you could play a videogame with a banana? Or a drum machine with a burrito? MaKey MaKey lets you do exactly that—and more.

MaKey MaKey turns everyday objects into digital touchpads for your computer. Jay Silver and Eric Rosenbaum designed it while they were in grad school at MIT. Inspired by the maker movement, they wanted an open-ended way of getting people, kids especially, to think creatively about how they interact with our increasingly networked world. The result is a clever kit that makes a controller out of literally anything that conducts electricity.

“Makey Makey is a device for allowing people to plug the real world into their computers,” said David Ten Have of JoyLabz, which produces the kit. “We want people to be able to see the world as their construction kit. And basically the way MaKey MaKey works is that it pretends to be a USB keyboard.”

Each kit comes with a circuit board—the heart of the kit—a set of alligator clips, and a USB cable that plugs into your computer. The USB provides power to the circuit board, and the alligator clips link the board to any object that conducts electricity. Turns out, a lot of things conduct electricity. Food. Plants. Play-Doh. Even you. “We were shooting for creating a product that had an incredibly low floor for participation, but an incredibly high ceiling for expression,” said Ten Have. “We’re seeing things made as simple as a banana piano and as advanced as measuring tools for chemistry labs, for instance.”
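Because the board presents itself as a standard USB keyboard, any program that reads key presses works with it unmodified. Here is a minimal sketch in Python using pygame (my own example, not JoyLabz code): wire a banana to the space-bar input, and every touch registers as a drum hit.

    # Minimal "banana drum": count presses of the space bar, which is what a
    # MaKey MaKey sends when the circuit through the banana is closed.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((200, 200))  # a window is needed to receive key events
    pygame.display.set_caption("Banana drum")

    hits = 0
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
                hits += 1
                print(f"Banana hit #{hits}")
    pygame.quit()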

The possibilities seem endless. Kids can turn their art into an instrument. Connect the alligator clips to people’s hands to make a human drum machine. People have transformed a trash can into a calculator and a slice of pizza into a game controller. Your garbage can is full of new controllers.


Sunday, March 6, 2016

Hyperloop's First Stop... Slovakia

When Elon Musk proposed his wild idea for the Hyperloop almost four years ago, he billed it as an unbelievably cool way to get from San Francisco to Los Angeles in 30 minutes. But the first place to adopt the futuristic tech might in fact be … Slovakia.

Now, the phrase “part of the former Soviet bloc” may not bring to mind the ideal setting for a revolution in transportation tech. But the country’s position as one of Europe’s fastest-growing economies makes it a natural fit, says Dirk Ahlborn, who leads Hyperloop Transportation Technologies and sees the system carrying passengers and freight between Bratislava and Vienna or Budapest in 10 minutes or less. “I personally think it’s a great place for it,” he says.

For anyone who hasn’t heard, Hyperloop is a conceptual high-speed transportation system that would fling people and cargo across great distances at speeds topping 700 mph, through tubes with close to zero air pressure inside. It works something like the pneumatic tubes banks once used. Yes, it sounds like science fiction, but several companies are pursuing it (Musk basically floated the idea, and invited people to run with it), and they appear to be making progress.

Hyperloop Transportation Technologies isn’t your typical tech startup. It has just two full-time employees. The real work is done by more than 500 engineers with day jobs at places like NASA, Boeing, and SpaceX. They spend their free time working on Hyperloop in exchange for stock options because they get to work on something that could genuinely revolutionize transportation. In August, Ahlborn announced partnerships with Oerlikon Leybold Vacuum and global engineering design firm Aecom, which suggests the idea is attracting resources from companies with stockholders to answer to. Hyperloop Transportation Technologies plans to start building a prototype in California later this year.

So if California works for the test site, why not for the real thing? It would certainly be helpful. Traveling between SF and LA requires taking a 12-hour Amtrak ride, a six-hour (on a good day) drive, or an hour-long flight that requires braving traffic to and from an airport and dealing with the TSA. And it’s not like that high-speed train we’ve been promised is coming anytime soon. In short, people would go fully bonkers for a Hyperloop route.

But it’s not feasible, at least not now. The same political battles that have pushed high-speed rail two years (and counting) behind schedule would certainly ensnarl Hyperloop.

Land is expensive (making right-of-way acquisition tricky), earthquakes are likely (raising questions about the safety of an unproven technology), and you can bet there would be no end of NIMBYism. All of which makes California a terrible place for beta testing.

Ahlborn’s competitor, which is called Hyperloop Technologies (there’s something of an “Original Ray’s Pizza” thing going on here), is taking a similar approach. CEO Rob Lloyd wants to have three working Hyperloops in place by 2020, and isn’t talking about connecting California’s big metropoles. He hasn’t specified any potential sites, but says a combination of government support, regulatory approval, and available capital are prerequisites.

“We would love to see LA to San Francisco, but our primary goal is to build the Hyperloop,” Ahlborn said in December, 2014. There’s no point in taking on tough political battles when other places are waving the technology in. Ahlborn says Slovakia has promised to handle securing land for the project, and various government officials seem psyched. “A transportation system of this kind would redefine the concept of commuting,” Vazil Hudak, the country’s minister of economy, said in an announcement.

The first stage of the Slovakia Hyperloop will run within Bratislava and cost $200-300 million, Ahlborn says. Connections to Vienna and Budapest would follow. He hasn’t pulled together the funding, but wants to see stage one built by 2020—nine years before California’s high-speed rail would be finished.


Saturday, March 5, 2016

Russia's Nuclear Space Rocket

Russia has an idea that could change the trip to Mars. Last week, its national nuclear corporation, Rosatom, announced it is building a nuclear engine that will reach Mars in a month and a half—with fuel to burn for the trip home. Russia might not achieve its goal of launching a prototype by 2025. But that has more to do with the country’s financial situation, which is really not great, than with the technical challenges of a nuclear engine. Actually, Soviet scientists solved many of those challenges by 1967, when they started launching fission-powered satellites. The Americans had their own program, SNAP-10A, which launched in 1965. Ah, the Cold War.

Both countries prematurely quashed their nuclear thermal propulsion programs (though the Soviets’ lasted into the 1980s). “Prematurely” because those fission systems were made for relatively lightweight orbital satellites—not high-thrust, interplanetary vessels fattened with life support for human riders. Nonetheless, “A nuclear contraption should not be too far off, not too complicated,” says Nikolai Sokov, senior fellow at the James Martin Center for Nonproliferation Studies in Monterey, CA. “The really expensive thing will be designing a ship around these things.”

Nuclear thermal is but one flavor of nuclear propulsion. Rosatom did not respond to questions about its system’s specs, but its announcement hints at some sort of thermal fission. Which is to say, the engine would generate heat by splitting atoms and use that heat to blast superheated hydrogen (or some other propellant) out the back. Hot propellant goes one direction, spaceship goes the other.

The principle isn’t too far from chemical propulsion. The fastest chemical rockets produce thrust by igniting one type of chemical (the oxidizer) to burn another (the propellant). Chemical or otherwise, rocket scientists rate propulsion methods based on a metric called specific impulse, “which means, if I have a pound of fuel, for how many seconds will that pound of fuel create a pound of thrust,” says Robert Kennedy, a systems engineer for Tetra Tech in Oak Ridge, TN, and former congressional fellow for the US House of Representatives’ space subcommittee. For instance, one pound of the chemical mixture powering the Space Launch System—NASA’s in utero rocket for the agency’s planned mission to Mars—produces about 269 seconds of thrust in a vacuum.
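For a rough sense of scale (my own back-of-envelope conversion, not a figure from the article): specific impulse in seconds becomes an effective exhaust velocity when multiplied by standard gravity, and nuclear thermal designs are typically credited with specific impulses in the neighborhood of 900 seconds, roughly triple the chemical number quoted above.

G0 = 9.80665  # standard gravity in m/s^2, the conversion constant between Isp and exhaust velocity

def exhaust_velocity_m_s(isp_seconds):
    # Effective exhaust velocity: v_e = Isp * g0
    return isp_seconds * G0

# 269 s is the SLS figure cited above; 900 s is a commonly quoted nuclear-thermal
# ballpark, not Rosatom's (unpublished) spec.
print(round(exhaust_velocity_m_s(269)))  # ~2,638 m/s
print(round(exhaust_velocity_m_s(900)))  # ~8,826 m/s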

But the outcomes of those two methods are radically different, because chemical rocketry has a catch-22. The faster or farther you want to go, the more fuel you need to pack. The more fuel you pack, the heavier your rocket. And the heavier your rocket, the more fuel you need to bring…

Eventually the rocket equation catches up with you: each extra bit of speed demands exponentially more fuel, which is why a year and a half is around the lower time limit for sending a chemically propelled, crewed mission to Mars. (Until Elon Musk’s spiritual descendants build asteroid-mined interplanetary fuel stations.) And that’s not even considering the incredible cost of launching fuel—about $3,000 a pound. Expensive, but the politics surrounding nuclear propulsion make it an even harder sell in America, so NASA is stuck with the Space Launch System (and its thirsty fuel tanks) for now.
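To see why that catch-22 bites so hard, here is a minimal sketch of the Tsiolkovsky rocket equation (the delta-v values are illustrative choices of mine, not numbers from the article). The propellant fraction climbs toward 100 percent as the required velocity change grows, and a higher specific impulse is the only way out.

import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_m_s, isp_seconds):
    # Tsiolkovsky: m0/mf = exp(delta_v / (Isp * g0)); the propellant fraction
    # is everything that isn't the final (dry) mass.
    mass_ratio = math.exp(delta_v_m_s / (isp_seconds * G0))
    return 1.0 - 1.0 / mass_ratio

# Compare a chemical-class engine (269 s) with a nuclear-thermal-class one (900 s)
# for a few illustrative delta-v budgets, in m/s.
for dv in (4000, 9000, 15000):
    chem = propellant_fraction(dv, 269)
    nuke = propellant_fraction(dv, 900)
    print(dv, f"{chem:.0%}", f"{nuke:.0%}")

Nothing here is specific to any one rocket; it is just the exponential behind the "more fuel means more weight means more fuel" spiral.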

512

Friday, March 4, 2016

Atomic Bombs actually did something good

        From 1945 to 1963, the United States and the Soviet Union detonated over 400 nuclear bombs aboveground in a very bad, no good, uh, weapons-measuring contest that the world never wants to happen again. Atolls were destroyed, steppes contaminated, and fallout spread around the world. But what if some good came out of those nuclear detonations?

No, seriously.

When all those nukes were detonated, they spit out neutrons that set off a chain of events to create carbon-14, an exceedingly rare isotope of carbon. The amount of carbon-14 in the atmosphere spiked. That carbon-14 reacted with oxygen to make carbon dioxide. Plants breathed it in, animals ate the plants, and humans ate the animals and plants. “Everyone alive has been labeled with it,” says Bruce Buchholz, a scientist at Lawrence Livermore National Laboratory. “The entire planet has been labeled.”

Unlike the short-lived radioactive particles that come out of nuclear detonations, carbon-14 is not dangerous to humans—but it is very useful, because levels of the isotope have been slowly falling off since the 1950s spike. By measuring the amount of carbon-14 in a sample, scientists can pinpoint its age within just a year or two. They can use the method to find the age of an unidentified body using tooth enamel, or study how often human fat cells are born and die, or discover how old trafficked ivory is. It works with literally anything that has carbon.
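The dating step itself is little more than reading a value off the declining atmospheric curve. Here is a toy sketch of the idea, using an invented calibration table rather than real published bomb-pulse data:

# Toy bomb-pulse dating: find the year whose atmospheric carbon-14 excess
# best matches a measured sample. The calibration table is invented for
# illustration; real work uses measured atmospheric records.
CALIBRATION = {  # year -> fraction of C-14 above pre-bomb background (made up)
    1965: 0.70, 1975: 0.35, 1985: 0.20, 1995: 0.11, 2005: 0.06, 2015: 0.04,
}

def estimate_year(measured_excess):
    # Return the calibration year whose excess is closest to the measurement.
    # Only meaningful on the declining, post-peak side of the curve.
    return min(CALIBRATION, key=lambda yr: abs(CALIBRATION[yr] - measured_excess))

print(estimate_year(0.18))  # -> 1985 with this made-up table

Real analyses also have to worry about which side of the early-1960s peak a sample sits on; the toy version only makes sense on the declining side.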

So every Monday afternoon, Buchholz and his colleagues at Lawrence Livermore fire up the accelerator mass spectrometer, the machine that measures carbon-14. They get samples from a hundred different scientists, law enforcement groups, and government agencies. That adds up to about 100 to 150 samples per week. The machine runs continuously through Thursday, day and night. They’re racing to do research that will soon be impossible.

A Modest Proposal, Part I
Back in the ‘90s, it was Buchholz who pioneered the idea of using carbon-14 to date biological samples. Buchholz, a nuclear engineer, met a neuroscientist at a chemistry conference, and they began one of those beautiful multidisciplinary collaborations to study the age of plaques found in the brains of Alzheimer’s patients. The carbon-14 work, however, really took off when Jonas Frisén, a stem cell biologist at the Karolinska Institute, read Buchholz’s papers. Frisén studies whether adults can grow new brain cells. Neuroscience had long held as dogma that humans are born with all the neurons they will ever have. But studies in mice and rats were beginning to upend that idea. Frisén looked at Buchholz’s papers and saw a way to measure the age of neurons. If you could find 20-year-old neurons in the brain of a 70-year-old, you would have proof that the brain generates new cells throughout an animal’s lifetime.

But no one knew yet if carbon-14 worked for dating brain tissue, and that task fell to Kirsty Spalding, then a postdoc in Frisén’s lab. Starting with human brains was a no-go, so they started with horses. Horses, like humans, live for decades. Spalding would drive up to a slaughterhouse an hour out of Stockholm and collect horse heads. “It was not my most fun project,” she says. “Especially terrible as a vegetarian going to the slaughterhouse.” But the method worked. Inside the brains of a horse six years old and another 19 years old, they found measurable differences in the amount of carbon-14.
The work moved to humans. And Spalding found that a region of the brain called the hippocampus, involved in memory, indeed generated new neurons in adulthood. Frisén has since done similar work in heart cells, also long thought to regenerate rarely, if ever. Spalding now has her own lab studying fat cells. Carbon-14 dating is a key method for understanding how tissues grow and re-grow in the body.

But carbon-14 in the atmosphere is still falling—by now, it is only 4 or 5 percent above pre-atomic age levels. In a few years, some of this research will be impossible. “That’s something we emphasize in grant applications,” says Frisén. “We need the money now.”

A Modest Proposal, Part II
So what if someone, you know, detonated another bomb? For the sake of science! (And only science.)

“We’re talking to North Koreans about what they can do,” jokes Frisén. But more seriously, he says, “I would prefer that not happening. You would need dozens. It’s not anything we’re counting on.” Indeed, dozens, if not hundreds, of bombs would be necessary to produce a big enough spike in carbon-14 to be useful for dating. And powerful ones too—the ones detonated over Hiroshima and Nagasaki were too small. Accidental radioactive releases, like Chernobyl or Fukushima, make no difference, because it’s the detonation that produces the neutrons needed to make carbon-14.

But what if—if!—you could detonate a few dozen nuclear bombs? Could you safely do it somewhere remote? Well, not likely.

The United States’ atmospheric testing in the Nevada desert likely contributed to a 10 percent increase in thyroid cancers in exposed people. In the Marshall Islands, the testing caused an additional 1 percent increase in cancer. “No matter how remote, the contamination will reach people somewhere,” says Steve Simon, a radiation epidemiologist at the National Cancer Institute, who led the Marshall Islands Nationwide Radiological Study.

Detonating a nuclear bomb high in the atmosphere—in space, essentially—would reduce radioactive fallout and contamination. The US actually did several high-altitude tests back in the 1950s and 60s. But nukes in space? Think of the satellites! The high-altitude test Starfish Prime knocked out several satellites in 1962, and that’s not something to repeat (on purpose).

So the window on carbon-14 dating is slowly closing. But that it was ever open in the first place was an odd, unintentional consequence of the atomic age.

973

Saturday, February 27, 2016

The F.B.I. just did the impossible

        Tech companies aren't exactly known for playing nice with one another. Apple and Microsoft were once the fiercest rivals in personal computing. These days, Apple is up against Google in smartphones, while Google and Microsoft battle it out over business software. Amazon is fighting Apple when it comes to devices and media, and Microsoft when it comes to cloud hosting services (among other markets). Facebook is crushing Twitter in terms of social media users, and now Facebook and Google are vying for mobile Internet eyeballs and ad dollars.
It's a rare day when all six of these companies can agree on something, but that day seems to have arrived, thanks to the U.S. Federal Bureau of Investigation.
        Last week, a judge ordered Apple to comply with an FBI request that Apple help circumvent security features on an iPhone used by a San Bernardino shooting suspect. Today, Apple filed a motion to dismiss the order to help unlock the iPhone for the FBI, and now Microsoft, Google, Facebook, Amazon, and Twitter all appear to be uniting behind Apple's move.
Google first showed tepid support for Apple in this fight last week, when CEO Sundar Pichai tweeted that "forcing companies to enable hacking could compromise users’ privacy."
        Then it seemed like Microsoft might be siding with the FBI, when co-founder and former CEO Bill Gates gave an interview to The Financial Times earlier this week saying that the FBI was only looking for Apple's help in this specific case. Apple has repeatedly contended that what the FBI is asking for — a way to bypass the auto-deletion feature on an iPhone 5C when a password is guessed incorrectly too many times — could be used to compromise other iPhones, including the hundreds of millions used by Apple's customers around the world.
        But Gates, who is now just an "advisor" at Microsoft, later walked back those remarks, and today Microsoft's president and chief legal officer Brad Smith testified before Congress that his company "wholeheartedly" supports Apple and will be filing an amicus brief in the court case to that effect. (Amicus briefs are legal documents filed by parties who aren't directly involved in the case, but who have a strong interest in the outcome and may be affected by it.)
        As it turns out, Microsoft isn't the only one about to do this: Google, Facebook, and Twitter are all coming together to file a joint amicus brief in support of Apple, according to USA Today. Amazon is also said to be working on "amicus brief options," according to a spokesperson who spoke to BuzzFeed.
        Other smaller tech companies and digital advocacy groups including the Electronic Frontier Foundation have also said they plan to support Apple by filing such amicus briefs as the case moves forward.

466

Friday, February 26, 2016

The Next B-2...Looks exactly like the B-2

        Three days ago, the U.S. Air Force released the first image of its B-21, the supplement to the tiny B-2 fleet already in service. Formerly known as the Long Range Strike Bomber, or LRS-B, the new Northrop Grumman-designed plane is now the B-21. If that sounds at all familiar, it’s because America’s last brand-new, shiny, Northrop Grumman-designed bomber was the B-2 Spirit. With the shroud lifted off the new bomber, we can see that the B-21 looks...almost exactly like its predecessor.
B-21 concept art, shown alongside the B-2

As I mentioned before, the B-2 fleet is tiny. Alas, the B-2 Spirit was a victim of its time: a highly advanced bomber that entered service right as the Cold War ended. As American security concerns switched from fears of Russian attack to worries about the side effects of Russian economic implosion, a top-of-the-line stealth bomber became the easiest fat to cut off the Pentagon’s budget. After just 21 planes were delivered, the program ended, leaving America with a super-fancy flying machine to show off at parades. Oh well. So, the B-21 will pick up where the B-2 left off, making the B-21 the "iPhone 5S of bombers" (if Apple only came out with a new iPhone every 25 years). According to the USAF press desk, the "designation B-21 recognizes the LRS-B as the first bomber of the 21st century," and is not a reference to the mere 21 B-2s that were made. Again, it will supplement the tiny B-2 fleet, and it will replace the ancient B-52s still used by the Air Force today, as well as the B-1 bombers. People have speculated a lot about the kind of tech the bomber will have, as well as whether or not it will be unmanned. In late 2014, Popular Science spoke with a senior defense official at the Pentagon involved in the program, who insisted that, when carrying a nuclear weapon, the bomber will have a human crew on board. The B-21’s Ace Combat 2-style concept art seems to confirm that, with windows visible on the plane. This matches the ad Northrop Grumman aired during the Super Bowl this year, which put cockpits for human pilots on its future fighters.
In a previous post I talked about the future of America's air force and had a paragraph on the SR-72. I want to make sure that people do not confuse the two, as the B-21 will not be capable of the speeds the SR-72 will be flying at.

417

Thursday, February 25, 2016

The Mozart Effect, and why you should be playing an instrument!

       You've heard of the Mozart effect, right? If not, here's a little background information for ya.
In the '90s, a study by Rauscher, Shaw, and Ky pointed to the theory that Mozart's music had an effect on spatial reasoning. Although the study only showed temporary increases in spatial reasoning, the findings were peddled to the masses through books and CDs under the false pretense that his music made your baby permanently smarter.
       A report, published in the journal Pediatrics, said it was unclear whether the original 1993 study had detected a "Mozart effect" or a potential benefit of music in general. But the authors noted that a previous study of adults with seizures found that compositions by Mozart, more so than those of other classical composers, appeared to lower seizure frequency. One team said it was possible that the proposed Mozart effect on the brain is related to the structure of his compositions, as Mozart's music tends to repeat the melodic line more frequently. More damning, a team from Vienna University's Faculty of Psychology analyzed all studies since 1993 that have sought to reproduce the Mozart effect and found no proof of the phenomenon's existence. Overall, they looked at 3,000 individuals across 40 studies conducted around the world. It did not even need to be Mozart to produce the effect: "Those who listened to music, Mozart or something else – Bach, Pearl Jam – had better results than the silent group. But we already knew people perform better if they have a stimulus," said Jakob Pietschnig, who led the study. "I recommend everyone listen to Mozart, but it's not going to improve cognitive abilities as some people hope," he added. In 1999, psychologist Christopher Chabris, now at Union College in Schenectady, N.Y., performed a meta-analysis of 16 studies related to the Mozart effect to gauge its overall effectiveness. "The effect is only one and a half IQ points, and it's only confined to this paper-folding task," Chabris says. He notes that the improvement could simply be a result of the natural variability a person experiences between two test sittings. In almost every study done, there were little to no improvements on any level.
        However, playing an instrument is a different ball game. Instead of having children listen to music passively, Rauscher advocates putting an instrument into their hands to raise intelligence. She cites a 1997 University of California, Los Angeles, study that found, among 25,000 students, those who had spent time involved in a musical pursuit tested higher on SATs and reading proficiency exams than those with no instruction in music. Other benefits of playing an instrument include learning how to listen, increased memory capacity, sharper concentration, greater perseverance, improved non-verbal communication skills, an increased sense of responsibility, better coordination, stress relief, greater creativity, better memory recall, organizational and time-management skills, help with crowd anxiety, people skills, and the growth of a social network among the people you play with. In other words, either learn how to play an instrument or have your children learn how to play one.

https://en.wikipedia.org/wiki/Mozart_effect
http://www.bbc.com/future/story/20130107-can-mozart-boost-brainpower
http://www.telegraph.co.uk/news/health/children/11500314/Mozart-effect-can-classical-music-really-make-your-baby-smarter.html
http://www.scientificamerican.com/article/fact-or-fiction-babies-ex/

520

Saturday, February 20, 2016

IceCube: In the Antarctic

        The IceCube Neutrino Observatory is a massive neutrino detector in Antarctica that takes advantage of the fact that the South Pole is covered in a medium through which charged particles can travel faster than light: ice. A kilometer down, the ice is beautifully clear, allowing bursts of Cherenkov radiation to propagate through it unhindered. The IceCube observatory itself consists of 5,160 digital optical modules, each about 25 centimeters in diameter, suspended on 86 individual strings lowered into boreholes in the ice. Each string's sensors sit between 1,450 and 2,450 meters below the surface, spaced 125 meters horizontally from neighboring strings, resulting in a neutrino detector that's a full cubic kilometer in size. What IceCube is looking for are the tiny flashes of blue light emitted by the electrons, muons, and tau particles streaking through the ice after a neutrino collides with a water molecule. These flashes are very dim, but there are no other sources of light that far down under the ice, and the photomultiplier tube inside each digital optical module can detect even just a handful of photons.
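Those numbers hang together, as a quick back-of-envelope check shows. The hexagonal-layout assumption below is mine, a rough idealization of the real string map:

import math

STRINGS = 86
SPACING_M = 125.0                 # horizontal spacing between neighboring strings
DEPTH_M = 2450.0 - 1450.0         # instrumented depth per string

# Area "owned" by each string in an idealized hexagonal grid.
area_per_string_m2 = (math.sqrt(3) / 2) * SPACING_M ** 2
volume_km3 = STRINGS * area_per_string_m2 * DEPTH_M / 1e9

print(round(volume_km3, 2))  # ~1.2 km^3, i.e. "a full cubic kilometer"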
        Depending on what kind of subatomic particle the neutrino turns into, IceCube will detect different Cherenkov radiation patterns. An electron neutrino will produce an electromagnetic shower (or cascade) of particles. The muon produced by a muon neutrino, on the other hand, can travel hundreds of meters, leaving a track that points back along the same trajectory as the muon neutrino that created it. A tau neutrino will produce a sort of combination of these two signatures. Maybe. I think. Tau neutrinos are difficult to detect, because tau particles themselves are extraordinarily massive and short lived: they're something like 3,500 times the mass of an electron (and 17 times the mass of a muon), with a lifetime of just 0.0000000000003 second, which means that they decay into other subatomic particles virtually instantaneously and are easily mistaken for electron neutrinos. IceCube has some ideas of what unique radiation signatures might suggest the detection of a tau (including the "double bang," the "inverted lollipop," and the "sugardaddy"), but they haven't found one yet.
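That lifetime is also why the tau's signatures only separate out at extreme energies: the lab-frame decay length is gamma times c times the lifetime, so only a very energetic tau travels far enough for its birth and death to register as two distinct flashes. A quick calculation (the example energies are my illustrative picks, not IceCube's):

C = 2.998e8                # speed of light, m/s
TAU_LIFETIME_S = 2.9e-13   # the ~0.0000000000003 s quoted above
TAU_MASS_GEV = 1.777       # tau rest mass, GeV

def decay_length_m(energy_gev):
    # Average lab-frame distance before decay: gamma * c * tau, with gamma ~ E / (m c^2).
    gamma = energy_gev / TAU_MASS_GEV
    return gamma * C * TAU_LIFETIME_S

print(round(decay_length_m(1e6), 1))   # a 1 PeV tau travels roughly 50 m
print(round(decay_length_m(1e3), 4))   # a 1 TeV tau: ~5 cm, which looks like a single cascade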

350

Friday, February 19, 2016

Detections of Cosmic Neutrinos

       Neutrinos are elementary particles (like quarks, photons, and the Higgs boson) that have no charge and virtually no mass. Since they're small, fast, and charge-free, they aren't affected by nuisances like electromagnetic fields, meaning that they can pass unmolested through rather a lot of pretty much anything. Some 65 billion solar neutrinos just passed through every square centimeter of your body, and if you wait a second, 65 billion more of them will do it again. The only way to bring a neutrino to a halt is if it runs smack into an electron or the nucleus of an atom, but this is ridiculously improbable: you'd need a piece of lead about a light year long to be reasonably sure of catching any one specific neutrino. Fortunately, the enormous number of neutrinos flying through everything all the time compensates for the low probability of collision, and that has allowed us to learn some things about these elusive particles.
        We’re pretty sure that neutrinos come in three different (but equally tasty) flavors: electron, muon, and tau. Each flavor has a slightly different (but tiny) mass, on the order of a million times smaller than the mass of a single electron, and a neutrino can oscillate between these three flavors as it zips along. The original flavor that each neutrino takes depends on how it was created: most often, neutrinos are created through high energy nuclear processes like you'd find going on inside stars. To take one common example, protons and neutrons colliding with each other create pions, which are subatomic particles that decay into a mix of muon and electron neutrinos.
        The most common source for the neutrinos that we see here on Earth is the sun, which produces an electron neutrino every time two protons fuse into deuterium. (This happens a lot.) What's much rarer to see are neutrinos that aren't produced close to home—neutrinos that come from outside of our solar system, and even outside of our galaxy. These are called cosmic neutrinos, or astrophysical neutrinos.
        Cosmic Neutrinos
Cosmic neutrinos are born, we think, in the same sorts of ultra high energy events out in the universe that also generate gamma rays and cosmic rays. We're talking events like supernova remnant shocks, active galactic nuclei jets, and gamma-ray bursts, which can emit as much energy in a few seconds as our sun does over ten billion years. As you might expect, the resulting neutrinos have stupendously high energies themselves: a million billion electronvolts (1 petaelectronvolt) or so. That works out to be about a sixth of a millijoule, which is a lot for a particle that has an effective size of nearly zero, and it is about equivalent to the kinetic energy of a thousand flying mosquitoes, in case that horrific unit of measurement is of any help to you.
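The mosquito comparison checks out, at least to within the precision that mosquitoes allow. A quick sketch, where the mosquito's mass and airspeed are rough assumptions of mine:

EV_TO_JOULE = 1.602e-19

neutrino_energy_j = 1e15 * EV_TO_JOULE        # 1 PeV in joules: ~1.6e-4 J (~0.16 millijoule)

# Rough mosquito: ~2.5 mg cruising at ~0.5 m/s (assumed figures).
mosquito_ke_j = 0.5 * 2.5e-6 * 0.5 ** 2       # ~3e-7 J

print(neutrino_energy_j)
print(round(neutrino_energy_j / mosquito_ke_j))  # ~500 mosquitoes, same ballpark as "a thousand"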
        The reason that cosmic neutrinos are important is the same reason that neutrinos themselves are so frustrating to measure: they ignore almost everything. This allows them to carry valuable information across the entire universe. What sort of information can be carried by a particle that we can't even measure directly? Here are three examples:
        Since neutrinos aren't affected all that much by even the densest matter, they can escape from the core of a supernova well before the shock wave from the inside of the collapsing star makes it to the outside and releases any photons. By detecting this initial neutrino burst, we can get a warning of as long as a few hours before a supernova would be visible from Earth.
Since neutrinos aren't affected at all by magnetic fields and therefore travel in straight lines, they can help us pinpoint the origin of ultra high energy cosmic rays, which are affected by magnetic fields and therefore can follow winding paths. We know that some cosmic rays come from supernovae, but many of them don't, and we're not sure where the rest of them originate. With energies over a million times greater than anything the Large Hadron Collider can produce, it would be nice to know where they come from.
The ratio between different flavors of neutrinos may suggest how they were formed. For neutrinos produced by pion decay, we'd expect to see two muon neutrinos for every electron neutrino. If we see different ratios, it would suggest a different formation environment, and particularly weird ratios could even lead to new physics.
        Detecting Neutrinos
Since neutrinos don't really interact with anything, there's no reliable way to detect them directly. Whether they're coming from the sun, the atmosphere, or somewhere more exotic, the best we can do is try and spot the aftermath of a very unlucky neutrino smashing headlong into something while we watch.
        When a neutrino does smash into something, one of two different things can happen. If the neutrino isn't super energetic, it might just bounce off in a new direction, passing on some of its momentum and energy into whatever it hits (which recoils in response) and occasionally causing that thing to break into pieces. Some neutrino detectors are designed to watch for both of these effects. The other thing that can happen is that the neutrino obliterates itself, dissolving into a subatomic particle that depends on the neutrino's flavor-of-the-moment: an electron neutrino turns into an electron, a muon neutrino turns into a muon, and a tau neutrino turns into a tau particle, stripping electric charge off of whatever it hits as it does so. Some detectors look for this change in charge of the thing that the neutrino ran into (it's the only way of detecting neutrinos with energies less than 1 MeV), but the detector that we're interested in looks for traces of the ex-neutrino's subatomic particle itself.
        Detecting particles moving at very high speeds is a (relatively) straightforward thing, at least conceptually. The key is what's called Cherenkov radiation, which is created by charged particles moving through a medium faster than the speed of light in that medium. It's sort of like the sonic boom created by an object travelling through air faster than the speed of sound, except with light. Here's what the Cherenkov radiation created by a burst of beta particles from a pulsed nuclear reactor looks like:

So now that we've got some lovely blue glowy-ness to look for, all we need is a medium through which charged particles can travel faster than light, and some kind of detection system that can spot the resulting Cherenkov radiation.
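"Faster than light in the medium" is a sharp threshold, and it sets both whether a particle radiates at all and the angle of the light cone the detectors see. A small sketch, taking the optical refractive index of deep ice to be about 1.31 (a commonly quoted value, not one stated in this post):

import math

def cherenkov_threshold_beta(n):
    # Minimum particle speed (as a fraction of c) for Cherenkov light: beta > 1/n.
    return 1.0 / n

def cherenkov_angle_deg(beta, n):
    # Opening angle of the Cherenkov cone: cos(theta) = 1 / (n * beta).
    return math.degrees(math.acos(1.0 / (n * beta)))

N_ICE = 1.31   # approximate optical refractive index of deep glacial ice (assumed)
print(round(cherenkov_threshold_beta(N_ICE), 2))         # ~0.76 c
print(round(cherenkov_angle_deg(0.9999, N_ICE), 1))      # ~40 degrees for a near-light-speed muon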

Continued in next post...

1081

Thursday, February 18, 2016

Cheap paper skin mimics the real thing

    Human skin’s natural ability to feel sensations such as touch and temperature differences is not easily replicated with artificial materials in the research lab. That challenge did not stop a Saudi Arabian research team from using cheap household items to make a “paper skin” that mimics many sensory functions of human skin. The artificial skin may represent the first single sensing platform capable of simultaneously measuring pressure, touch, proximity, temperature, humidity, flow, and pH levels. Previously, researchers had tried using exotic materials such as carbon nanotubes or silver nanoparticles to create sensors capable of measuring just a few of those things. By comparison, the team at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia used common off-the-shelf materials such as paper sticky notes, sponges, napkins, and aluminum foil. The total materials cost for a paper skin patch 6.5 centimeters on each side came to just $1.67.
        "Its impact is beyond low cost: simplicity," says Muhammad Mustafa Hussain, an electrical engineer at KAUST. “My vision is to make electronics simple to understand and easy to assemble so that ordinary people can participate in innovation.” The paper skin’s low cost and wide array of capabilities could have a huge impact on many technologies. Flexible and wearable electronics for monitoring human health and fitness could become both cheaper and more widely available. New human-computer interfaces—similar to today’s motion-sensing or touchpad devices—could emerge based on the paper skin’s ability to sense pressure, touch, heat, and motion. The paper skin could also become a cheap sensor for monitoring food quality or outdoor environments.
        Last but not least, cheap artificial skin could give robots the capability to feel their environment in the same way that humans do, Hussain says. In a paper detailing the research—published in the 19 February issue of the journal Advanced Materials Technologies—the researchers said:
       "The envisioned applications of such artificial skin takes a lot of surface area coverage (like                 robotic skins or skins for robots). There, lowering cost is crucial while not compromising                     performance. In that sense, if mechanical ruggedness can be proved, there is no scientific or                 technical reason for not accepting paper skin as a viable option."
The team’s low-cost approach often seems as simple as a classroom experiment. As an example, researchers built a pressure sensor by sandwiching a napkin or sponge between two metal contacts made from aluminum foil. The same simple device could also detect touch and flow based on changes in pressure. Its aluminum foil even allowed it to act as a proximity sensor for electromagnetic fields with a detection range of 13 centimeters.
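One plausible way to picture that pressure sensor (my toy model, not the team's published analysis) is as a parallel-plate capacitor: squeezing the sponge thins the dielectric gap, and the capacitance rises. The dimensions and permittivity below are assumed for illustration:

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def capacitance_pf(area_m2, gap_m, eps_r):
    # Parallel-plate capacitance in picofarads: C = eps0 * eps_r * A / d.
    return EPS0 * eps_r * area_m2 / gap_m * 1e12

# Assumed numbers: a 6.5 cm x 6.5 cm patch, a ~2 mm sponge, relative permittivity ~2.
area = 0.065 * 0.065
print(round(capacitance_pf(area, 2e-3, 2.0), 1))   # relaxed: ~37 pF
print(round(capacitance_pf(area, 1e-3, 2.0), 1))   # pressed to 1 mm: ~75 pF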

431

Sunday, February 14, 2016

Intel's Core-M Processor - what you need to know

        Intel's new Core-M processor is at the heart of a laptop revolution - and a new wave of fanless computers. Running at only 4.5 watts, compared to the 11.5 watts of an i5 processor or 57 watts of a quad-core i7, this processor doesn't require a fan-cooled heat sink. With its low power consumption and low heat generation, manufacturers can go fanless and build quieter laptops that are thinner than we've ever seen before. For example, the latest MacBook is half an inch thick; the second-generation Lenovo ThinkPad Helix is just 0.38 inches thick, compared to its 0.46-inch, Core i5-powered predecessor. Where the original Helix's keyboard dock had a hinge with dual fans built in, the new Ultrabook Keyboard doesn't have a single fan. Last year's model lasted just 5 hours and 48 minutes when detached from its dock, but Lenovo promises 8 hours of endurance from the Core M-powered Helix.
       Though Core M is the first processor based on Intel's new 14 nm Broadwell architecture, it certainly won't be the last. In 2015, Intel plans to use Broadwell as the basis of its next-generation Core i3 / i5 / i7 chips for both laptops and desktops. Over the past few years, Intel has released a new processor architecture on an annual basis, with a die shrink occurring every other generation. The current Haswell and prior-gen Ivy Bridge architectures use a 22 nm process, while Broadwell will be the first to use 14 nm. A smaller process means that Intel can fit more transistors into a smaller space, using less power, generating less heat, and taking up less room in the chassis. The Core M processor package eats up just 495 square millimeters of space, about half the size of the 960-square-millimeter 4th Generation Core Series package.

301

Saturday, February 13, 2016

Fusion - Germany's Wendelstein 7-X

       Yesterday, one of the grandest experimental fusion reactors in the world flared to life, converting hydrogen into a plasma for less than a second. The honor of pressing the button went to a PhD in Quantum Chemistry (who also happens to be the Chancellor of Germany), Angela Merkel.
Why such a high-profile ribbon-cutting? Fusion is the nuclear reaction that powers the hearts of stars. Theoretically, if you could get light atoms to fuse into heavier atoms here on Earth, the energy produced by the reaction (which happens at immense temperatures and pressures) would be a clean source of energy that could continue almost indefinitely, without the long-lived radioactive waste of nuclear fission (the method currently employed at nuclear power plants).
        The German experiment, called Wendelstein 7-X, received funding or components from Germany, Poland, and the United States. This was its first run with hydrogen, though it did some initial work creating helium plasma last year. Though the hydrogen plasma was short-lived, it was an exciting moment for researchers. “With a temperature of 80 million degrees Celsius and a lifetime of a quarter of a second, the device’s first hydrogen plasma has completely lived up to our expectations,” said Hans-Stephan Bosch, head of operations for Wendelstein 7-X. The Wendelstein 7-X is not designed to produce energy. Instead, the experiment is focused solely on producing and maintaining a magnetically confined ring of super-heated plasma, which is a key step toward fusion energy.
The Germans aren't the only ones working on fusion, though. In France, the largest fusion reactor ever made, called ITER, is under construction. Private companies are in on the race too, with Lockheed Martin also working on a fusion reactor design.
        While they are all meant to achieve a similar goal, there are differences among the designs used by the various groups. One of the more popular designs is the tokamak, a Russian design used by ITER. It uses a doughnut-shaped machine to generate a magnetic field that contains the hot plasma. The Wendelstein 7-X, on the other hand, is a stellarator. While it is also doughnut-shaped, it has the distinct advantage of theoretically being able to run indefinitely, instead of in pulses like a tokamak. If the Wendelstein 7-X succeeds in producing plasma for long stretches of time (the team hopes to reach 30 minutes by 2025, if not earlier), then it might show that the stellarator design could be used in future fusion power plants.

417

Thursday, February 11, 2016

Breaking News: Gravitational Waves - never seen before - "the breakthrough of the century"

        Today, scientific history was made. At 3:30 pm, the National Science Foundation held a press conference to "update the scientific community on efforts to detect" gravitational waves. They reported that, for the first time, scientists have observed these gravitational ripples in the fabric of spacetime, arriving at Earth from a cataclysmic event in the distant universe. This discovery confirms a major prediction of Albert Einstein’s 1915 general theory of relativity. It also opens up an unprecedented new view on the cosmos.
        We have been able to see the universe ever since the first human looked upward at the skies, and with the advent of the telescope in 1610 we began using instruments to extend our sense of sight ever further into the universe. Gravitational waves are different, however. They are not at all like light or any of the other electromagnetic radiations such as radio waves, X-rays, infrared, and ultraviolet. Instead, they are ‘ripples’ in the fabric of the universe (space and time itself!). These ripples can be interpreted as sounds, since sound is also an oscillating wave, just in air rather than in spacetime. Researchers can even turn simulated gravitational wave signals into audible sounds, which could then be translated into a piece of music, like a "gravitational wave symphony". The simulated signals are oscillating tones that increase in frequency until they abruptly stop with a ‘chirp’. These are the tell-tale signals for which gravitational wave astronomers search.
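If you want to hear roughly what such a chirp is like, it is easy to fake one: a tone that sweeps upward in frequency and then cuts off. The sketch below is a toy illustration of mine, not the actual LIGO waveform:

import math

SAMPLE_RATE = 44100   # audio samples per second
DURATION_S = 0.5      # half a second of sound

def toy_chirp(f_start=50.0, f_end=400.0):
    # Build a sine tone whose instantaneous frequency sweeps linearly from
    # f_start to f_end, then stops abruptly -- a crude stand-in for a merger chirp.
    samples = []
    phase = 0.0
    dt = 1.0 / SAMPLE_RATE
    for i in range(int(SAMPLE_RATE * DURATION_S)):
        t = i * dt
        f = f_start + (f_end - f_start) * (t / DURATION_S)  # instantaneous frequency
        phase += 2 * math.pi * f * dt                        # integrate frequency into phase
        samples.append(math.sin(phase))
    return samples

print(len(toy_chirp()))  # 22050 samples, ready to be written out as audio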
        The LIGO project (Laser Interferometer Gravitational-Wave Observatory), which is searching for, and has now found, gravitational waves, was established in 1992 by MIT, Caltech, and numerous other universities, and it operates twin detectors in Livingston, Louisiana, and Hanford, Washington. The National Science Foundation funds the project, with contributions from other international scientific groups. It didn't detect anything from 2002 to 2010, and after a five-year shutdown to upgrade the detectors, it came back online in the fall of 2015 with four times the sensitivity it had before the upgrade.
        As I said before, gravitational waves were predicted by Albert Einstein back in 1915 on the basis of his general theory of relativity. They are not possible in the Newtonian theory of gravitation, which assumes that physical interactions propagate at infinite speed, so this detection is yet another result that Newton's theory cannot explain.
        Gravitational waves carry information about their dramatic origins and about the nature of gravity that cannot otherwise be obtained. Physicists have concluded that the detected gravitational waves were produced during the final fraction of a second of the merger of two black holes into a single, more massive spinning black hole. This collision of two black holes had been predicted but never observed. Scientists at LIGO estimate that the black holes involved in the event were 29 and 36 times the mass of the sun, and that the event took place 1.3 billion years ago. During the merger, a mass equal to about 3 times the mass of the sun was converted, in a fraction of a second, into gravitational waves—with a peak power output about 50 times that of the whole visible universe. This all happens because (according to general relativity), as a pair of black holes orbit each other, they lose energy through the emission of gravitational waves. That causes them to gradually approach each other over billions of years, and then much more quickly in the final minutes. During the final fraction of a second, the two black holes collide at nearly one-half the speed of light and form a single more massive black hole, converting a portion of the combined black holes’ mass to energy according to Einstein’s formula E=mc2. This energy is emitted as a final strong burst of gravitational waves. It is these gravitational waves that LIGO has observed. By looking at the time of arrival of the signals—the detector in Livingston recorded the event 7 milliseconds before the detector in Hanford—scientists can say that the source was located in the Southern Hemisphere.
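Two of those figures are easy to sanity-check: the energy released by converting three solar masses via E=mc2, and the maximum signal delay between the two detector sites, which is just their separation divided by the speed of light. The ~3,000 km separation below is my approximate figure, not one from the announcement:

C = 2.998e8            # speed of light, m/s
M_SUN_KG = 1.989e30    # mass of the sun, kg

# Energy radiated as gravitational waves: ~3 solar masses via E = m c^2.
energy_joules = 3 * M_SUN_KG * C ** 2
print(f"{energy_joules:.2e} J")   # ~5.4e47 joules

# Maximum time a light-speed signal needs to travel between the Livingston and
# Hanford detectors, assuming ~3,000 km straight-line separation (my rough estimate).
separation_m = 3.0e6
print(round(separation_m / C * 1000, 1), "ms")  # ~10 ms; the observed offset was 7 ms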
        The ability to detect gravitational waves will not just bring us new views of objects that we can already see, but will allow us to detect and study objects and events that are currently completely hidden from view. It also means that, after years of research, hard work, and technological innovation, this prediction of Einstein's theory has finally been confirmed.



For more information see these links:
https://en.wikipedia.org/wiki/Gravitational_wave
https://en.wikipedia.org/wiki/LIGO
http://www.ligo.org/
https://www.ligo.caltech.edu/
http://www.nytimes.com/2016/02/12/science/ligo-gravitational-waves-black-holes-einstein.html?_r=0
http://www.nytimes.com/2015/11/24/science/a-century-ago-einsteins-theory-of-relativity-changed-everything.html
http://www.nasa.gov/feature/goddard/2016/nsf-s-ligo-has-detected-gravitational-waves
https://www.theguardian.com/science/2016/feb/11/gravitational-waves-discovery-hailed-as-breakthrough-of-the-century
https://www.theguardian.com/science/across-the-universe/live/2016/feb/11/gravitational-wave-announcement-latest-physics-einstein-ligo-black-holes-live
https://www.theguardian.com/science/2016/feb/09/gravitational-waves-everything-you-need-to-know
https://www.theguardian.com/science/across-the-universe/2016/feb/11/todays-gravitational-wave-announcement-could-be-two-great-discoveries-in-one

713