Saturday, February 27, 2016

The F.B.I. just did the impossible

        Tech companies aren't exactly known for playing nice with one another. Apple and Microsoft were once the fiercest rivals of personal computing. These days, Apple is up against Google in smartphones, while Google and Microsoft battle it out over business software. Amazon is fighting Apple when it comes to devices and media, and Microsoft when it comes to cloud hosting services (among other markets). Facebook is crushing Twitter in terms of social media users, and now Facebook and Google are vying for more mobile Internet eyeballs and ad dollars.
It's a rare day when all six of these companies can agree on something, but that day seems to have arrived, thanks to the U.S. Federal Bureau of Investigation.
        Last week, a judge ordered Apple to comply with an FBI request that Apple help circumvent security features on an iPhone used by a San Bernardino shooting suspect. Today, Apple filed a motion to dismiss the order to help unlock the iPhone for the FBI, and now Microsoft, Google, Facebook, Amazon, and Twitter all appear to be uniting behind Apple's move.
Google first showed tepid support for Apple in this fight last week, when CEO Sundar Pichai tweeted that "forcing companies to enable hacking could compromise users’ privacy."
        Then it seemed like Microsoft might be siding with the FBI, when co-founder and former CEO Bill Gates gave an interview to The Financial Times earlier this week saying that the FBI was only looking for Apple's help in this specific case. Apple has repeatedly contended that what the FBI is asking for — a way to bypass the auto-deletion feature on an iPhone 5C when a password is guessed incorrectly too many times — could be used to compromise other iPhones, including the hundreds of millions used by Apple's customers around the world.
        But Gates, who is now just an "advisor" at Microsoft, later walked back those remarks, and today Microsoft's president and chief legal officer Brad Smith testified before Congress that his company "wholeheartedly" supports Apple and will be filing an amicus brief in the court case to that effect. (Amicus briefs are legal documents filed by parties who aren't directly involved in a case but who have a strong interest in the outcome and may be affected by it.)
        As it turns out, Microsoft isn't the only one about to do this: Google, Facebook, and Twitter are all coming together to file a joint amicus brief in support of Apple, according to USA Today. Amazon is also said to be working on "amicus brief options," according to a spokesperson who spoke to BuzzFeed.
        Other smaller tech companies and digital advocacy groups including the Electronic Frontier Foundation have also said they plan to support Apple by filing such amicus briefs as the case moves forward.


Friday, February 26, 2016

The Next B-2...Looks exactly like the B-2

        Three days ago, the U.S. Air Force released the first image of its B-21, the supplement to the tiny B-2 fleet already in service. Formerly known as the Long Range Strike Bomber, or LRS-B, the new Northrop Grumman-designed plane is now the B-21. If that sounds at all familiar, it's because America's last brand-new, shiny, Northrop Grumman-designed bomber was the B-2 Spirit. With the shroud lifted off the new bomber, we can see that the B-21 looks...almost exactly like its predecessor.
[Concept art: the B-21 (left) beside the B-2 (right)]

As I mentioned before, the B-2 fleet is tiny. Alas, the B-2 Spirit was a victim of its time: a highly advanced bomber that entered service right as the Cold War ended. As American security concerns switched from fears of Russian attack to worries about the side effects of Russian economic implosion, a top-of-the-line stealth bomber became the easiest fat to cut off the Pentagon's budget. After just 21 planes were delivered, the program ended, leaving America with a super-fancy flying machine to show off at parades. Oh well.
        So, the B-21 will pick up where the B-2 left off, making it the "iPhone 5S of bombers" (if Apple only came out with a new iPhone every 25 years). According to the USAF press desk, "designation B-21 recognizes the LRS-B as the first bomber of the 21st century," and is not a reference to the just 21 B-2s that were made. Again, it will supplement the tiny B-2 fleet, and replace the ancient B-52s still used by the Air Force today, as well as the older B-1 bombers.
        People have speculated a lot about the kind of tech the bomber will have, as well as whether or not it will be unmanned. In late 2014, Popular Science spoke with a senior defense official at the Pentagon involved in the program, who insisted that, when carrying a nuclear weapon, the bomber will have a human crew on board. The B-21's Ace Combat 2-style concept art seems to confirm that, with windows visible on the plane. This matches the ad Northrop Grumman aired during the Super Bowl this year, which put cockpits for human pilots on its future fighters.
In a previous post I talked about the future of America's air force and had a paragraph on the SR-72. I want to make sure that people do not confuse the two, as the B-21 will not be capable of the speeds the SR-72 will be flying at.


Thursday, February 25, 2016

The Mozart Effect, and why you should be playing an instrument!

       You've heard of the Mozart effect, right? If not, here's a little background information for ya.
In the '90s, a study by Rauscher, Shaw, and Ky suggested that Mozart's music had an effect on spatial reasoning. Although the study showed only temporary increases in spatial reasoning, the findings were peddled to the masses through books and CDs under the false pretense that his music made your baby permanently smarter.
       A report published in the journal Pediatrics said it was unclear whether the original 1993 study had detected a "Mozart effect" or a potential benefit of music in general. But the authors noted that a previous study of adults with seizures found that compositions by Mozart, more so than those of other classical composers, appeared to lower seizure frequency. One team said it was possible that the proposed Mozart effect on the brain is related to the structure of his compositions, as Mozart's music tends to repeat the melodic line more frequently.
        In more damning evidence, a team from Vienna University's Faculty of Psychology analyzed every study since 1993 that had sought to reproduce the Mozart effect and found no proof of the phenomenon's existence. Overall, they looked at 3,000 individuals across 40 studies conducted around the world. It did not even need to be Mozart to get the effect: "Those who listened to music, Mozart or something else – Bach, Pearl Jam – had better results than the silent group. But we already knew people perform better if they have a stimulus," said Jakob Pietschnig, who led the study. "I recommend everyone listen to Mozart, but it's not going to improve cognitive abilities as some people hope," he added. In 1999, psychologist Christopher Chabris, now at Union College in Schenectady, N.Y., performed a meta-analysis of 16 studies related to the Mozart effect to survey its overall effectiveness. "The effect is only one and a half IQ points, and it's only confined to this paper-folding task," Chabris says. He notes that the improvement could simply be a result of the natural variability a person experiences between two test sittings. In almost every study done, there were little to no improvements on any level.
        However, playing an instrument is a different ball game. Instead of having children listen to music passively, Rauscher advocates putting an instrument into their hands to raise intelligence. She cites a 1997 University of California, Los Angeles, study which found that, among 25,000 students, those who had spent time involved in a musical pursuit tested higher on SATs and reading proficiency exams than those with no instruction in music. Other benefits of playing an instrument include learning how to listen, increased memory capacity, sharper concentration, perseverance, improved non-verbal communication skills, an increased sense of responsibility, better coordination, stress relief, greater creativity, better memory recall, organizational and time-management skills, help with crowd anxiety, people skills, and the growth of a social network among the people you play with. In other words, either learn how to play an instrument or have your children learn how to play one.

https://en.wikipedia.org/wiki/Mozart_effect
http://www.bbc.com/future/story/20130107-can-mozart-boost-brainpower
http://www.telegraph.co.uk/news/health/children/11500314/Mozart-effect-can-classical-music-really-make-your-baby-smarter.html
http://www.scientificamerican.com/article/fact-or-fiction-babies-ex/


Saturday, February 20, 2016

IceCube: In the Antarctic

        The IceCube Neutrino Observatory is a massive neutrino detector in Antarctica that takes advantage of the fact that the South Pole is covered in a medium through which charged particles can travel faster than light: ice. A kilometer down, the ice is beautifully clear, allowing bursts of Cherenkov radiation to propagate through it unhindered. The IceCube observatory itself consists of 5,160 digital optical modules (DOMs), each about 25 centimeters in diameter, suspended on 86 individual strings lowered into boreholes in the ice. Each string sits between 1,450 m and 2,450 m below the surface, spaced 125 m horizontally from neighboring strings, resulting in a neutrino detector that's a full cubic kilometer in size. What IceCube is looking for are the tiny flashes of blue light emitted by the electrons, muons, and tau particles flashing through the ice after a neutrino collides with a water molecule. These flashes are very dim, but there are no other sources of light that far down in the ice, and the photomultiplier tube inside each digital optical module can detect even just a handful of photons.
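Just from the numbers in that paragraph, you can work out the detector's granularity, which will matter in a moment when we talk about telling different event signatures apart. A trivial sketch, using only the figures quoted above:

```python
# IceCube geometry, worked out from the numbers quoted above
DOMS_TOTAL = 5160
STRINGS = 86
TOP_M, BOTTOM_M = 1450, 2450   # instrumented depth range of each string

doms_per_string = DOMS_TOTAL // STRINGS                        # -> 60
vertical_spacing = (BOTTOM_M - TOP_M) / (doms_per_string - 1)  # -> ~17 m
print(f"{doms_per_string} DOMs per string, one roughly every {vertical_spacing:.0f} m")
```

So the detector is much finer-grained vertically (about 17 meters between modules) than horizontally (125 meters between strings).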
        Depending on what kind of subatomic particle the neutrino turns into, IceCube will detect different Cherenkov radiation patterns. An electron neutrino will produce an electromagnetic shower (or cascade) of particles. The muon produced by a muon neutrino, on the other hand, can travel hundreds of meters, leaving a track that points back along the same trajectory as the muon neutrino that created it. A tau neutrino will produce a sort of combination of these two signatures. Maybe. I think. Tau neutrinos are difficult to detect, because tau particles themselves are extraordinarily massive and short-lived: they're something like 3,500 times the mass of an electron (and 17 times the mass of a muon), with a lifetime of just 0.0000000000003 seconds (3 × 10⁻¹³ s), which means that they decay into other subatomic particles virtually instantaneously and are easily mistaken for electron neutrinos. IceCube has some ideas of what unique radiation signatures might suggest the detection of a tau (including the "double bang," the "inverted lollipop," and the "sugardaddy"), but they haven't found one yet.
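To get a feel for just how fleeting a tau is, here's a back-of-the-envelope sketch (my own round numbers, not IceCube's): a relativistic tau survives for gamma times its rest-frame lifetime, so its decay length is gamma × c × τ.

```python
# How far does a high-energy tau travel before decaying?
# Decay length = gamma * c * lifetime, with gamma = E / (tau rest energy).
C = 3.0e8                 # speed of light, m/s
TAU_MASS_GEV = 1.777      # tau rest energy, GeV
TAU_LIFETIME_S = 2.9e-13  # mean tau lifetime, s

def tau_decay_length_m(energy_gev):
    gamma = energy_gev / TAU_MASS_GEV     # relativistic time-dilation factor
    return gamma * C * TAU_LIFETIME_S

for e_gev in (1e3, 1e6, 1e7):             # 1 TeV, 1 PeV, 10 PeV
    print(f"{e_gev:.0e} GeV tau travels ~{tau_decay_length_m(e_gev):.3g} m")
```

At 1 TeV the tau decays after about 5 centimeters, which is hopeless; at a few PeV it travels tens of meters. That's why signatures like the "double bang" (one cascade where the tau is born, a second where it decays) only become resolvable, given the roughly 17-meter module spacing, at the very highest energies.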


Friday, February 19, 2016

Detections of Cosmic Neutrinos

       Neutrinos are elementary particles (like quarks, photons, and the Higgs boson) that have no charge and virtually no mass. Since they're small, fast, and charge-free, they aren't affected by nuisances like electromagnetic fields, meaning that they can pass unmolested through rather a lot of pretty much anything. Some 65 billion solar neutrinos just passed through every square centimeter of your body, and if you wait a second, 65 billion more of them will do it again. The only way to bring a neutrino to a halt is if it runs smack into an electron or the nucleus of an atom, but this is ridiculously improbable: you'd need a piece of lead about a light-year long to be reasonably sure of catching any one specific neutrino. Fortunately, the enormous number of neutrinos that are flying through everything all the time compensates for the low probability of collision, and that has allowed us to learn some things about these elusive particles.
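To put the light-year-of-lead line on slightly firmer footing, here's the standard attenuation arithmetic. Taking that one light-year as the neutrino's mean free path in lead, the chance of a neutrino interacting somewhere in a slab of thickness d is 1 − e^(−d/λ):

```python
import math

LIGHT_YEAR_M = 9.46e15   # one light-year, in meters
MFP_M = LIGHT_YEAR_M     # assumed mean free path of a neutrino in lead

def p_interact(thickness_m):
    # Exponential attenuation: P = 1 - exp(-d / mean_free_path)
    return 1.0 - math.exp(-thickness_m / MFP_M)

print(f"1 m of lead:          {p_interact(1.0):.1e}")           # ~1e-16
print(f"1 light-year of lead: {p_interact(LIGHT_YEAR_M):.2f}")  # ~0.63
```

Even the full light-year of lead only stops about 63 percent of the neutrinos (that's 1 − 1/e), so "reasonably sure" really calls for several light-years of the stuff.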
        We’re pretty sure that neutrinos come in three different (but equally tasty) flavors: electron, muon, and tau. Each flavor has a slightly different (but tiny) mass, on the order of a million times smaller than the mass of a single electron, and a neutrino can oscillate between these three flavors as it zips along. The original flavor that each neutrino takes depends on how it was created: most often, neutrinos are created through high energy nuclear processes like you'd find going on inside stars. To take one common example, protons and neutrons colliding with each other create pions, which are subatomic particles that decay into a mix of muon and electron neutrinos.
        The most common source for the neutrinos that we see here on Earth is the sun, which produces an electron neutrino every time two protons fuse into deuterium. (This happens a lot.) What's much rarer to see are neutrinos that aren't produced close to home—neutrinos that come from outside of our solar system, and even outside of our galaxy. These are called cosmic neutrinos, or astrophysical neutrinos.
        Cosmic Neutrinos
Cosmic neutrinos are born, we think, in the same sorts of ultra high energy events out in the universe that also generate gamma rays and cosmic rays. We're talking events like supernova remnant shocks, active galactic nuclei jets, and gamma-ray bursts, which can emit as much energy in a few seconds as our sun does over ten billion years. As you might expect, the resulting neutrinos have stupendously high energies themselves: a million billion electronvolts (1 petaelectronvolt) or so. That works out to be about a tenth of a millijoule, which is a lot for a particle that has an effective size of nearly zero, and it is about equivalent to the kinetic energy of a thousand flying mosquitoes, in case that horrific unit of measurement is of any help to you.
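If you'd like to check the mosquito math, it's one line of physics: 10^15 eV converted to joules, against ½mv² for a hypothetical mosquito (the 2.5-milligram mass and 0.4 m/s cruising speed below are my own assumptions, and they're obviously negotiable):

```python
EV_TO_J = 1.602e-19           # joules per electronvolt
pev_in_j = 1e15 * EV_TO_J     # 1 PeV in joules: ~1.6e-4 J
print(f"1 PeV = {pev_in_j:.2e} J")

# A hypothetical mosquito: about 2.5 mg, cruising at about 0.4 m/s
mosquito_ke = 0.5 * 2.5e-6 * 0.4**2   # (1/2) m v^2
print(f"one mosquito carries {mosquito_ke:.1e} J")
print(f"1 PeV is roughly {pev_in_j / mosquito_ke:.0f} mosquitoes' worth")
```

With those numbers you get a figure in the high hundreds, the same order of magnitude as the thousand flying mosquitoes above.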
        The reason that cosmic neutrinos are important is the same reason that neutrinos themselves are so frustrating to measure: they ignore almost everything. This allows them to carry valuable information across the entire universe. What sort of information can be carried by a particle that we can't even measure directly? Here are three examples:
        Since neutrinos aren't affected all that much by even the densest matter, they can escape from the core of a supernova well before the shock wave from the inside of the collapsing star makes it to the outside and releases any photons. By detecting this initial neutrino burst, we can get a warning of as long as a few hours before a supernova would be visible from Earth.
Since neutrinos aren't affected at all by magnetic fields and therefore travel in straight lines, they can help us pinpoint the origin of ultra high energy cosmic rays, which are affected by magnetic fields and can therefore follow winding paths. We know that some cosmic rays come from supernovae, but many of them don't, and we're not sure where the rest originate. With energies over a million times greater than anything the Large Hadron Collider can produce, it would be nice to know where they come from.
The ratio between different flavors of neutrinos may suggest how they were formed. For neutrinos produced by pion decay, we'd expect to see two muon neutrinos for every electron neutrino. If we see different ratios, it would suggest a different formation environment, and particularly weird ratios could even lead to new physics.
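That two-to-one ratio isn't mysterious; it falls straight out of counting the pion decay chain mentioned earlier (π → μ + ν_μ, then μ → e + ν_e + ν̄_μ). A trivial tally, lumping neutrinos and antineutrinos together by flavor, as these ratio arguments usually do:

```python
from collections import Counter

def pion_decay_neutrinos():
    # pi+ -> mu+ + nu_mu, then mu+ -> e+ + nu_e + anti-nu_mu
    # (neutrinos and antineutrinos counted together, by flavor)
    return ["nu_mu", "nu_mu", "nu_e"]

tally = Counter()
for _ in range(1000):               # a thousand charged pions
    tally.update(pion_decay_neutrinos())

print(tally)  # Counter({'nu_mu': 2000, 'nu_e': 1000}): a 2:1 muon-to-electron ratio
```

One wrinkle worth knowing: over cosmic distances, flavor oscillations scramble that 2:1:0 source mix toward roughly 1:1:1 by the time the neutrinos arrive at Earth, so a measured ratio far from even would hint at something unusual at the source or along the way.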
        Detecting Neutrinos
Since neutrinos don't really interact with anything, there's no reliable way to detect them directly. Whether they're coming from the sun, the atmosphere, or somewhere more exotic, the best we can do is try and spot the aftermath of a very unlucky neutrino smashing headlong into something while we watch.
        When a neutrino does smash into something, one of two different things can happen. If the neutrino isn't super energetic, it might just bounce off in a new direction, passing on some of its momentum and energy to whatever it hits (which recoils in response) and occasionally causing that thing to break into pieces; physicists call this a neutral-current interaction. Some neutrino detectors are designed to watch for both of these effects. The other thing that can happen, a charged-current interaction, is that the neutrino obliterates itself, dissolving into a subatomic particle that depends on the neutrino's flavor-of-the-moment: an electron neutrino turns into an electron, a muon neutrino turns into a muon, and a tau neutrino turns into a tau particle, stripping electric charge off of whatever it hits as it does so. Some detectors look for this change in charge of the thing that the neutrino ran into (it's the only way of detecting neutrinos with energies less than 1 MeV), but the detector that we're interested in looks for traces of the ex-neutrino's subatomic particle itself.
        Detecting particles moving at very high speeds is a (relatively) straightforward thing, at least conceptually. The key is what's called Cherenkov radiation, which is created by charged particles moving through a medium faster than the speed of light in that medium. It's sort of like the sonic boom created by an object travelling through air faster than the speed of sound, except with light. Here's what the Cherenkov radiation created by a pulsed burst of beta particles from a nuclear reactor looks like:

So now that we've got some lovely blue glowy-ness to look for, all we need is a medium through which charged particles can travel faster than light, and some kind of detection system that can spot the resulting Cherenkov radiation.
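Two numbers capture the whole trick, and both follow from the refractive index: a charged particle radiates only if it outruns light in the medium (β > 1/n), and the light comes off in a cone at cos θ = 1/(nβ). A quick sketch, assuming n ≈ 1.31 for deep ice (a round value; the real index varies with depth and wavelength):

```python
import math

N_ICE = 1.31                # assumed refractive index of deep glacial ice
ELECTRON_REST_MEV = 0.511   # electron rest energy, MeV

# Threshold: the particle needs beta > 1/n to radiate at all
beta_min = 1.0 / N_ICE
gamma_min = 1.0 / math.sqrt(1.0 - beta_min**2)
ke_min = (gamma_min - 1.0) * ELECTRON_REST_MEV
print(f"electron Cherenkov threshold: {ke_min:.2f} MeV")   # ~0.28 MeV

# Cone angle for an ultra-relativistic particle (beta ~ 1)
theta_deg = math.degrees(math.acos(1.0 / N_ICE))
print(f"Cherenkov angle in ice: {theta_deg:.0f} degrees")  # ~40 degrees
```

The low threshold (a fraction of an MeV for an electron) and the fixed, roughly 40-degree cone are what make these light patterns so informative: the cone's orientation encodes the particle's direction of travel.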

Continued in next post...


Thursday, February 18, 2016

Cheap paper skin mimics the real thing

    Human skin’s natural ability to feel sensations such as touch and temperature difference is not easily replicated with artificial materials in the research lab. That challenge did not stop a Saudi Arabian research team from using cheap household items to make a “paper skin” that mimics many sensory functions of human skin. The artificial skin may represent the first single sensing platform capable of simultaneously measuring pressure, touch, proximity, temperature, humidity, flow, and pH levels. Previously, researchers have tried using exotic materials such as carbon nanotubes or silver nanoparticles to create sensors capable of measuring just a few of those things. By comparison, the team at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia used common off-the-shelf materials such as paper sticky notes, sponges, napkins and aluminum foil. Total material cost for a paper skin patch 6.5 centimeters on each side came to just $1.67.
        "Its impact is beyond low cost: simplicity," says Muhammad Mustafa Hussain, an electrical engineer at KAUST. “My vision is to make electronics simple to understand and easy to assemble so that ordinary people can participate in innovation.” The paper skin’s low cost and wide array of capabilities could have a huge impact on many technologies. Flexible and wearable electronics for monitoring human health and fitness could become both cheaper and more widely available. New human-computer interfaces—similar to today’s motion-sensing or touchpad devices—could emerge based on the paper skin’s ability to sense pressure, touch, heat, and motion. The paper skin could also become a cheap sensor for monitoring food quality or outdoor environments.
        Last but not least, cheap artificial skin could give robots the capability to feel their environment in the same way that humans do, Hussain says. In a paper detailing the research—published in the 19 February issue of the journal Advanced Materials Technologies—the researchers said:
       "The envisioned applications of such artificial skin takes a lot of surface area coverage (like                 robotic skins or skins for robots). There, lowering cost is crucial while not compromising                     performance. In that sense, if mechanical ruggedness can be proved, there is no scientific or                 technical reason for not accepting paper skin as a viable option."
The team’s low-cost approach often seems as simple as a classroom experiment. As an example, researchers built a pressure sensor by sandwiching a napkin or sponge between two metal contacts made from aluminum foil. The same simple device could also detect touch and flow based on changes in pressure. Its aluminum foil even allowed it to act as a proximity sensor for electromagnetic fields with a detection range of 13 centimeters.
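The paper doesn't spell out the readout electronics here, but a foil-sponge-foil sandwich is, among other things, a parallel-plate capacitor, and that's one plausible way to read such a pressure sensor: squeezing the sponge thins the dielectric gap, and capacitance rises as C = ε₀ε_r·A/d. A toy model (the 6.5-centimeter patch size is from the article; the gap and permittivity values are my own guesses, not KAUST's):

```python
EPS0 = 8.854e-12       # vacuum permittivity, F/m

def capacitance_pf(area_m2, gap_m, eps_r):
    # Parallel-plate capacitor: C = eps0 * eps_r * A / d, reported in pF
    return EPS0 * eps_r * area_m2 / gap_m * 1e12

AREA_M2 = 0.065 * 0.065   # 6.5 cm x 6.5 cm patch, per the article
EPS_SPONGE = 1.5          # assumed effective permittivity of a dry sponge

for gap_mm in (5.0, 4.0, 3.0, 2.0):     # sponge compressing under load
    c = capacitance_pf(AREA_M2, gap_mm * 1e-3, EPS_SPONGE)
    print(f"gap {gap_mm:.0f} mm -> {c:.0f} pF")
```

Changes of tens of picofarads are readable with very modest electronics, which fits the classroom-experiment spirit of the whole project.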


Sunday, February 14, 2016

Intel's Core-M Processor - what you need to know

        Intel's new Core M processor is at the heart of a laptop revolution - and a new wave of fanless computers. Running at only 4.5 watts, compared to the 11.5 watts of an i5 processor or the 57 watts of a quad-core i7, this processor doesn't require a fan-cooled heat sink. With its low power consumption and low heat generation, manufacturers can build laptops that are thinner than we've ever seen before. Going fanless allows manufacturers to build thinner devices that make less noise. For example, the latest MacBook is half an inch thick; the second-generation Lenovo ThinkPad Helix is just 0.38 inches thick, compared to its 0.46-inch, Core i5-powered predecessor. Where the original Helix's keyboard dock had a hinge with dual fans built in, the new Ultrabook Keyboard doesn't have a single fan. Last year's model lasted just 5 hours and 48 minutes when detached from its dock, but Lenovo promises 8 hours of endurance from the Core M-powered Helix.
       Though Core M is the first processor based on Intel's new 14nm Broadwell architecture, it certainly won't be the last. Through 2015, Intel planned to use Broadwell as the basis of its next-generation Core i3/i5/i7 chips for both laptops and desktops. Over the past few years, Intel has released a new processor architecture on an annual basis, with a die shrink occurring every other generation. The Haswell and prior-gen Ivy Bridge architectures use a 22nm process, while Broadwell is the first to use 14nm. A smaller process means that Intel can fit more transistors into a smaller space, using less power, generating less heat, and taking up less room in the chassis. The Core M processor package eats up just 495 square millimeters of space, about half the size of the 960-square-millimeter 4th Generation Core Series package.


Saturday, February 13, 2016

Fusion - Germany's Wendelstein 7-X

       Yesterday, one of the grandest experimental fusion reactors in the world flared to life, converting hydrogen into a plasma for less than a second. The honor of pressing the button went to Angela Merkel, who holds a PhD in quantum chemistry and also happens to be the Chancellor of Germany.
Why such a high-profile ribbon-cutting? Fusion is a kind of nuclear power, the same process that happens on a much larger scale in the hearts of stars. Theoretically, if you could get light atoms to fuse into heavier atoms, the reaction (which happens at immense temperatures and pressures) would provide a clean source of energy that could continue almost indefinitely, without the radioactive byproducts of nuclear fission (the method currently employed at nuclear power plants).
        The German experiment, called Wendelstein 7-X, received funding or components from Germany, Poland, and the United States. This is the first run with hydrogen, though it did some initial work creating helium plasma last year. Though the hydrogen plasma was short-lived, it was an exciting moment for researchers. “With a temperature of 80 million degrees Celsius and a lifetime of a quarter of a second, the device’s first hydrogen plasma has completely lived up to our expectations”, Hans-Stephan Bosch, head of operations for Wendelstein 7-X said. The Wendelstein 7-X is not designed to produce energy. Instead, the experiment is focused solely on producing and maintaining a levitating ball of super-heated plasma, which is a key step towards fusion energy.
The Germans aren't the only ones working on fusion, though. In France, the largest fusion reactor ever made, called ITER, is under construction. Private companies are in on the race too, with Lockheed Martin also working on a fusion reactor design.
        While they are all meant to achieve a similar goal, there are differences among the designs used by the various groups. One of the more popular designs is the tokamak, a Russian design used by ITER. It uses a doughnut-shaped machine to generate a magnetic field that contains the hot plasma. The Wendelstein 7-X, on the other hand, is a stellarator. While it is also doughnut-shaped, it has the distinct advantage of theoretically being able to run indefinitely, instead of in pulses like a tokamak. If the Wendelstein 7-X succeeds in producing plasma for long periods of time (they hope to reach 30 minutes by 2025, if not earlier), then it might show that the stellarator design could be used in future fusion power plants.


Thursday, February 11, 2016

Breaking News: Gravitational Waves - never seen before - "the breakthrough of the century"

        Today, scientific history was made. At 3:30pm, the National Science Foundation held a press conference to "update the scientific community on efforts to detect" gravitational waves. They reported that, for the first time, scientists have observed these gravitational ripples in the fabric of spacetime, arriving at the Earth from a cataclysmic event in the distant universe. This discovery confirms a major prediction of Albert Einstein's 1915 general theory of relativity. It also opens up an unprecedented new view on the cosmos.
        We have been able to see the universe ever since the first human looked upwards to the skies, and with the advent of the telescope in 1610 we began using instruments to extend our sense of sight ever further into the Universe. Gravitational waves are different, however. They are not at all like light or any of the other electromagnetic radiations such as radio waves, X-rays, infrared, and ultraviolet rays. Instead, they are 'ripples' in the fabric of the universe (space and time itself!). A good analogy is sound, since sound waves are essentially oscillating ripples in the air: researchers can turn simulated gravitational wave signals into audible sounds, and those sounds could even be translated into a piece of music, like a "gravitational wave symphony". The simulated signals are oscillating tones that increase in frequency until they abruptly stop with a 'chirp'. These are the tell-tale signals for which gravitational wave astronomers search.
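You can get a feel for that chirp with a few lines of code. This is a toy sound, not a physically computed waveform (a real inspiral's frequency and amplitude evolve according to general relativity), but the rising sweep that cuts off abruptly is the right general idea:

```python
import math, struct, wave

RATE = 44100     # CD-quality sample rate
DURATION = 1.0   # seconds

samples, phase = [], 0.0
for i in range(int(RATE * DURATION)):
    t = i / RATE
    freq = 60 + 340 * (t / DURATION) ** 3   # sweep upward, fastest at the end
    phase += 2 * math.pi * freq / RATE
    samples.append(int(20000 * math.sin(phase)))

with wave.open("chirp.wav", "w") as f:      # writes a playable mono WAV file
    f.setparams((1, 2, RATE, 0, "NONE", "not compressed"))
    f.writeframes(b"".join(struct.pack("<h", s) for s in samples))
```

Play the resulting file and you'll hear a low hum climbing in pitch and then stopping dead: an audio cartoon of two black holes spiraling together.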
        The LIGO (Laser Interferometer Gravitational-Wave Observatory) project, which is searching for, and has now found, gravitational waves, was established in 1992 by MIT and Caltech, with twin detectors in Livingston, Louisiana, and Hanford, Washington, and with contributions from numerous other universities. The National Science Foundation funds the project, which also receives contributions from other international scientific groups. LIGO detected nothing from 2002 to 2010, and after a five-year shutdown to upgrade the detectors, it came back online in the fall of 2015 with four times the sensitivity it had before the upgrade.
        As I said before, gravitational waves were predicted by Albert Einstein back in 1915 on the basis of his general theory of relativity. As gravitational waves are impossible in the Newtonian theory of gravitation (which holds that physical interactions propagate at infinite speed), this discovery also disproves part of that theory.
        Gravitational waves carry information about their dramatic origins and about the nature of gravity that cannot otherwise be obtained. Physicists have concluded that the detected gravitational waves were produced during the final fraction of a second of the merger of two black holes into a single, more massive spinning black hole. This collision of two black holes had been predicted but never observed. Scientists at LIGO estimate that the black holes involved in the event were 29 and 36 times the mass of the sun, and that the event took place 1.3 billion years ago. During the merger, mass equivalent to about 3 suns was converted in a fraction of a second into gravitational waves—with a peak power output about 50 times that of the whole visible universe. This all happens because (according to general relativity), as a pair of black holes orbit around each other, they lose energy through the emission of gravitational waves. That causes them to gradually approach each other over billions of years, and then much more quickly in the final minutes. During the final fraction of a second, the two black holes collide into each other at nearly one-half the speed of light and form a single more massive black hole, converting a portion of the combined black holes' mass to energy, according to Einstein's formula E = mc². This energy is emitted as a final strong burst of gravitational waves. It is these gravitational waves that LIGO has observed. By looking at the time of arrival of the signals—the detector in Livingston recorded the event 7 milliseconds before the detector in Hanford—scientists can say that the source was located in the Southern Hemisphere.
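Those numbers are easy to sanity-check with E = mc². A rough sketch (the solar mass is standard; the 20-millisecond burst duration is my own round figure for the final moment of the merger):

```python
SOLAR_MASS_KG = 1.989e30
C = 2.998e8   # speed of light, m/s

# About 3 solar masses of mass-energy radiated as gravitational waves
energy_j = 3 * SOLAR_MASS_KG * C**2
print(f"energy radiated: {energy_j:.1e} J")          # ~5.4e47 J

# Rough peak power, assuming most of it escaped in ~20 ms
print(f"rough peak power: {energy_j / 0.02:.1e} W")  # ~2.7e49 W
```

That works out to around 5 × 10⁴⁷ joules, several thousand times more energy than the sun will radiate over its entire ten-billion-year lifetime, released in a fraction of a second.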
        The ability to detect gravitational waves will not just bring us new views of objects that we can already see, but will allow us to detect and study objects and events that are currently completely hidden from view. It also means that after years of research, hard work, and technological innovation, Einstein's theory has finally been proven.



For more information see these links:
https://en.wikipedia.org/wiki/Gravitational_wave
https://en.wikipedia.org/wiki/LIGO
http://www.ligo.org/
https://www.ligo.caltech.edu/
http://www.nytimes.com/2016/02/12/science/ligo-gravitational-waves-black-holes-einstein.html?_r=0
http://www.nytimes.com/2015/11/24/science/a-century-ago-einsteins-theory-of-relativity-changed-everything.html
http://www.nasa.gov/feature/goddard/2016/nsf-s-ligo-has-detected-gravitational-waves
https://www.theguardian.com/science/2016/feb/11/gravitational-waves-discovery-hailed-as-breakthrough-of-the-century
https://www.theguardian.com/science/across-the-universe/live/2016/feb/11/gravitational-wave-announcement-latest-physics-einstein-ligo-black-holes-live
https://www.theguardian.com/science/2016/feb/09/gravitational-waves-everything-you-need-to-know
https://www.theguardian.com/science/across-the-universe/2016/feb/11/todays-gravitational-wave-announcement-could-be-two-great-discoveries-in-one


Saturday, February 6, 2016

Super-large space telescopes

        Throughout history, there have been four things that have determined just how much information we can glean about the Universe through astronomy:

1. The size of your telescope, which determines both how much light you can gather in a given amount of time and also your resolution.
2. The quality of your optical systems and cameras/CCDs, which allow you to maximize the amount of light that becomes usable data.
3. The "seeing" through the telescope, i.e., the distortion caused by the atmosphere, which can be minimized by high altitudes, still air, cloudless nights, and adaptive optics technology.
4. And your techniques of data analysis, which can ideally make the most of every single photon of light that comes through.

        There have been tremendous advances in ground-based astronomy over the past 25 years, but they've occurred almost exclusively through improvements in numbers 2 and 4. The largest telescope in the world in 1990 was the Keck 10-meter telescope, and while there are a number of 8-to-10 meter class telescopes today, 10 meters is still the largest class of telescope in existence. Moreover, we've really reached the limits of what improvements in those areas can achieve without going to larger apertures. This isn't intended to minimize the gains in these other areas; they've been tremendous. But it's important to realize how far we've come. The charge-coupled devices (CCDs) that are mounted to telescopes can focus on either wide-field or very narrow areas of the sky, gathering all the photons in a particular band over the entire field-of-view or performing spectroscopy — breaking up the light into its individual wavelengths — for up to hundreds of objects at once. We can cram more megapixels into a given surface area. Quite simply, we're at the point where practically every photon of the right wavelength that comes in through a telescope's mirror can be utilized, and where we can observe for longer and longer periods of time to go deeper and deeper into the Universe if we have to.

        And finally, computational power and data analysis techniques have improved tremendously, so that more useful information can be recorded and extracted from the same data. These are tremendous advances, but just like a generation ago, we're still using the same size telescopes. If we want to go deeper into the Universe, to higher resolution and greater sensitivities, we have to go to larger apertures: we need a bigger telescope. There are currently three major projects competing to be first: the Thirty Meter Telescope atop Mauna Kea, the (39 meter) European Extremely Large Telescope in Chile, and the (25 meter) Giant Magellan Telescope (GMT), also in Chile. These represent the next giant leap forward in ground-based astronomy, and the Giant Magellan Telescope is probably going to be first, having broken ground at the end of last year, with early operations planned to begin as soon as 2021 and full operations by 2025.
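The payoff of a bigger aperture is easy to quantify: resolution improves as 1.22·λ/D (the Rayleigh criterion) and light-gathering power as D². Here's a quick comparison of today's 10-meter class against the three giants, at 550 nm (a visible wavelength I'm picking arbitrarily):

```python
import math

WAVELENGTH_M = 550e-9    # green visible light

def diffraction_limit_mas(aperture_m):
    # Rayleigh criterion: theta = 1.22 * lambda / D, in milliarcseconds
    theta_rad = 1.22 * WAVELENGTH_M / aperture_m
    return math.degrees(theta_rad) * 3600 * 1000

for name, d in [("Keck", 10), ("GMT", 25), ("TMT", 30), ("E-ELT", 39)]:
    gain = (d / 10) ** 2   # collecting area relative to a 10 m mirror
    print(f"{name} ({d} m): {diffraction_limit_mas(d):.1f} mas, "
          f"{gain:.1f}x Keck's light grasp")
```

A 39-meter mirror beats a 10-meter one by a factor of about 15 in collecting area and about 4 in raw angular resolution, which is exactly the kind of leap that items 2 through 4 on the list above can no longer deliver on their own.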



Friday, February 5, 2016

Concussions, A Disastrous Physical and Biological Hybrid, and what we're doing to stop them

        During Sunday's Super Bowl, the showdown between the Denver Broncos and the Carolina Panthers, the competition is sure to be fierce. There will be an estimated 130-plus plays consisting of hundreds of hits, tackles, spears, and layouts. That translates into at least minor head trauma for most players.
        In 2007, three former NFL players committed suicide. Andre Waters, 44, shot himself in the head after bouts with depression. Terry Long drank a bottle of antifreeze at 45. Thirty-six-year-old Justin Strzelczyk heard voices and died in a crash while fleeing police. Every autopsy was performed by physician Bennet Omalu, and every one showed signs of brain damage. Omalu, a former neuropathologist at the University of Pittsburgh School of Medicine, was the first to pinpoint forensic evidence of football-induced chronic traumatic encephalopathy (CTE), a variation of "boxer's dementia." Symptoms include confusion, mood disorders, slurred speech, and memory loss. This put heavy pressure on the NFL to address the problem, and a new committee was formed: the NFL Head, Neck, and Spine Committee, which brought actual neurological insight to these traumatic injuries. Evan Breedlove, a mechanical engineering graduate student at Purdue University who studies neurotrauma in Indiana high-school football players, says that hitting a player in the head is like shaking a Jell-O mold on a platter. The brain shakes, and little splits called microhemorrhages can form. The splits also allow fluid in, which increases the likelihood of further concussions. "It's generally a bad thing when the brain gets exposed to the chemistry in the rest of your body," Breedlove says. The average NFL player sustains as many as 1,500 hits to the head throughout a season. It's the accumulation of impact after impact that does real damage. "The big hit may just be the straw that breaks the camel's back," he says.
        According to the NFL, there were 271 documented game-related concussions this past season — the most recorded by the league since 2011. Roughly one-third of those were caused by helmet-to-helmet contact. One of the worst of those hits occurred in January, during a grinding back-and-forth playoff match between the Cincinnati Bengals and the Pittsburgh Steelers, a game generally regarded as one of the season's dirtiest. The number of concussions in the NFL has increased by at least 20 percent each season for the past three years. The rate of concussions among high-school and college players (where many go unreported) is probably much higher.
        Many companies, most notably Riddell, have developed new helmets to help prevent concussions. During the 2011 season, NFL officials introduced "smart helmets" and mouthguards outfitted with accelerometers and radio-frequency identification to measure the location and direction of hits experienced during a game or practice. The data is wirelessly transmitted to a computer on the sidelines, which calculates the magnitude of the hit and the location of the blow. In a two-year pilot program with high-school and college players, the system gathered data on more than 1.5 million head impacts. Researchers at Riddell, the helmet's manufacturer, set a concussion threshold of 98 Gs per game—more than that, and they recommend benching the player. Each helmet costs about $1,000. A less-expensive helmet from Riddell, prepared for the 2012 season for high-school and college players, used a thin film to gauge hits. On impact, pressure compresses the film, which generates a charge, sending a signal to a handheld unit on the sideline. In 2010, Kevin Guskiewicz of the NFL's Head, Neck and Spine Committee said he would like to see "every player geared up with some kind of monitoring".
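Riddell's actual firmware isn't public, so treat this as a sketch of the sideline logic the article describes rather than the real system: combine the three accelerometer axes into a resultant magnitude, compare against the 98 G threshold, and flag the player. Everything here except that threshold is my own assumption:

```python
import math

THRESHOLD_G = 98.0   # the benching threshold Riddell's researchers set

def hit_magnitude_g(ax, ay, az):
    # Resultant acceleration from a 3-axis accelerometer reading, in G
    return math.sqrt(ax**2 + ay**2 + az**2)

def flag_hits(hits):
    """hits: list of (ax, ay, az) peak readings, one tuple per impact."""
    flagged = []
    for i, (ax, ay, az) in enumerate(hits):
        g = hit_magnitude_g(ax, ay, az)
        if g > THRESHOLD_G:
            flagged.append((i, g))
    return flagged

# Hypothetical game data: routine contact plus one big helmet-to-helmet blow
game = [(20, 15, 8), (35, 40, 12), (80, 55, 30), (10, 5, 3)]
for i, g in flag_hits(game):
    print(f"hit #{i}: {g:.0f} G - recommend pulling the player for evaluation")
```

A real system would presumably also accumulate sub-threshold hits over a season, since, as Breedlove notes above, it's the accumulation of impact after impact that does the damage.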


Thursday, February 4, 2016

Nanites: The Future of Tech

       In 1959, renowned physicist Richard Feynman gave his famous talk "There's Plenty of Room at the Bottom," describing what was soon to be called nanotechnology. But it wasn't until 1986 that the ideas of atomic-sized structures and manipulation of matter at that level surfaced again, this time in K. Eric Drexler's book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" that would be able to build a copy of itself, and of other items of arbitrary complexity, with atomic control. For years afterwards, nanotechnology remained little more than a sci-fi fantasy in which nanites could do amazing things such as heal wounds, build massive structures in time frames impossible for humans, and cure diseases. That began to change in 2005, when Rice University built a "nanocar" out of fullerene molecules on a chassis of carbon compounds. The precision required to do this was enormous, as the car was 20,000 times thinner than a human hair.
        However, the invention of actual working nanites would require a surge in either processing power or miniaturization, because building a robot on the nano scale would require very tiny electronics to allow the robot to work, not to mention weather-proofing it for missions inside the body and in adverse climates.
        Only time will tell.
