#SearCh<3Bar#


Thursday, August 2, 2007

What We Don't Know

What’s at Earth’s Core?
We know that at the center of the planet, about 4,000 miles down, sits a solid ball of iron the size of the moon. We also know that we’re standing on about 1,800 miles of rock, which forms Earth’s crust and mantle. But what’s in between the mantle and the iron ball? A churning ocean of liquid of some sort, but scientists aren’t certain what it’s made of or how it reacts to the stuff around it.
We’re confident there’s a lot of iron in this ocean. But what else? Based on what researchers understand about the pressure, temperature, and density of materials down there, some maintain that the core also contains lots of hydrogen and sulfur. Raymond Jeanloz of UC Berkeley believes that another component is oxygen, which comes from rocks in the part of the mantle that borders the liquid core.
Knowing more about the molten concoction would give scientists clues about how Earth formed and how heat and convection affect plate tectonics. More information could help solve another mystery, too: whether, as many researchers suspect, the inner core is growing. If so, it could eventually overtake the molten metal surrounding it, throwing off Earth’s magnetic field.

Is time an illusion?
Plato argued that time is constant - it’s life that’s the illusion. Galileo shrugged over the philosophy of time and figured out how to plot it on a graph so he could get on with the important physics. Albert Einstein said that time is just another dimension, a fourth one to go along with the up-down, side-side, forward-back we move through every day. Our understanding of time, Einstein said, is based on its relationship to our environment. Weirdly, the faster you travel, the slower time moves. The most radical interpretation of his theory: Past, present, and future are merely figments of our imagination, constructs built by our brains so that everything doesn’t seem to happen at once.
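For reference, the claim that the faster you travel, the slower time moves has a precise form in special relativity (standard textbook physics, not spelled out in the article): a moving clock records less elapsed time,

\[ \Delta t' = \Delta t \, \sqrt{1 - \frac{v^2}{c^2}}, \]

where \( \Delta t \) is the interval measured by an observer at rest and \( \Delta t' \) is what the moving clock shows. At about 87 percent of light speed the moving clock ticks at half the rate; at everyday speeds the difference is immeasurably small.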

Einstein’s conception of unified spacetime works better on graph paper than in the real world. Time isn’t like those other dimensions - for one thing, we move only one way within it. “What’s needed is not to make the notion of time and general relativity work or to go back to the notion of absolute time, but to invent something radically new,” says Lee Smolin, a physicist at the Perimeter Institute in Waterloo, Ontario. Somebody is going to get it right eventually. It’ll just take time.



How does a fertilized egg become a human?
Imagine that you place a 1-inch-wide black cube in an empty field. Suddenly the cube makes copies of itself - two, four, eight, 16. The proliferating cubes begin to form structures - enclosures, arches, walls, tubes. Some of the tubes turn into wires, PVC pipes, structural steel, wooden studs. Sheets of cubes become wallboard and wood paneling, carpet and plate-glass windows. The wires begin connecting themselves into a network of immense complexity. Eventually, a 100-story skyscraper stands in the field.

That’s basically the process a fertilized cell undergoes beginning with the moment of conception. How did that cube know how to make a skyscraper? How does a cell know how to make a human (or any other mammal)? Biologists used to think that the cellular proteins somehow carried the instructions. But now proteins look more like pieces of brick and stone - useless without a building plan and a mason. The instructions for how to build an organism must be written in a cell’s DNA, but no one has figured out exactly how to read them.


What happened to the Neanderthals?
They were our cousins, our hominid cousins. They looked like us, they walked like us, they may have even thought like us. So why did the Neanderthals disappear, while we Homo sapiens dug in and stayed?

Ever since the first Neanderthal bones were discovered 150 years ago in Germany’s Neander Valley, paleoanthropologists have sought to understand what could possibly have destroyed the once-thriving and widely dispersed species of prehistoric human. By most measures, the Neanderthals were the equal of our direct ancestors, the fully modern out-of-Africa characters often called Cro-Magnons, with whom the Neanderthals coexisted for thousands of years. Like our forebears, Neanderthals were supple sojourners, happily colonizing parts of Europe and Asia. They stood upright, skillfully sculpted and wielded stone tools, and buried their dead with pomp and hope. They were slightly larger and more muscular than their Cro-Magnon counterparts, and their brains were bigger, too. Yet by about 30,000 years ago, the Neanderthals had vanished, leaving Cro-Magnons as the sole survivors of the tangled Hominina tribe. Moreover, while Neanderthals may well have been capable of interbreeding with Cro-Magnons, recent DNA analysis has revealed no signs that such Stone Age Capulet-Montague mergers occurred.

Some scientists have attributed the Neanderthals’ demise to chronic disease, pointing out that many Neanderthal skeletal remains show signs of arthritis and other bone disorders. Other people have wondered whether genocide was to blame. Perhaps the Cro-Magnons systematically exterminated their competitors, just as chimpanzees have been observed hunting down and killing every last member of a neighboring chimp troupe.

Another, more recent, hypothesis is that Homo sapiens outcompeted Homo neanderthalensis because of a difference in their economic systems. Reporting in the December issue of Current Anthropology, Steven Kuhn and Mary Stiner of the University of Arizona wrote that the archaeological record suggests all Neanderthals - male, female, adult, child - focused their efforts on “obtaining large terrestrial game.” In other words, they were all hunters. The Cro-Magnons, by contrast, appear to have divided labor along more or less sexual lines, with men doing most of the big-game killing, women and children gathering tubers and other plant foods, and everybody sharing the flesh and fruits of their efforts. By adopting this sort of specialization of labor, the researchers speculate, Homo sapiens likely proved more efficient and flexible than Neanderthals and were able to expand their population more rapidly.

In other words, at least according to this new theory put forward by Stiner and Kuhn, the Neanderthals weren’t felled by a pathogen or a primordial Slobodan Milosevic. They were done in by the bedrock family values of Fred and Wilma Flintstone.


Why do we sleep?
It’s a catchy phrase: You snooze, you lose. But cutting out those 40 winks would be a bad idea. All mammals sleep, and if they’re deprived of shut-eye they die - faster than if they’re denied food. But no one really knows why.

Obviously, sleep rests the body. But watching TV does that, too. The answer must lie in the noggin. One leading theory says that while we’re awake, a substance builds up in the brain (or gets depleted) and sleep removes (or replenishes) it. That makes sense. For part of the night, the brain idles in an energy-conserving state called slow-wave sleep. Freed from the duties of consciousness, it can focus on cleanup.

The problem with this idea is that another portion of each night, about a quarter, is given to REM sleep, during which the brain is anything but idle. REM stands for rapid eye movement, and it corresponds with vivid dreams, suggesting that it plays a role in consolidating memories. But there’s probably more to it: Though antidepressants suppress REM sleep, patients taking them suffer no memory impairment.

In any case, it’s clear that pillow time serves a critical purpose. Bad things - like some 100,000 traffic accidents a year, not to mention uncounted instances of calling your spouse by your ex’s name - happen when we don’t get enough z’s. At some point, someone’s going to have to dream up a reason.

Where did life come from?
Natural selection explains how organisms that already exist evolve in response to changes in their environment. But Darwin’s theory is silent on how organisms came into being in the first place, which he considered a deep mystery. What creates life out of the inanimate compounds that make up living things? No one knows. How were the first organisms assembled? Nature hasn’t given us the slightest hint.

If anything, the mystery has deepened over time. After all, if life began unaided under primordial conditions in a natural system containing zero knowledge, then it should be possible - it should be easy - to create life in a laboratory today. But determined attempts have failed. International fame, a likely Nobel Prize, and $1 million from the Gene Emergence Project await the researcher who makes life on a lab bench. Still, no one has come close.

Experiments have created some basic materials of life. Famously, in 1952 Harold Urey and Stanley Miller mixed the gases thought to exist in Earth’s primordial atmosphere, exposed them to electricity to simulate lightning, and found that amino acids self-assembled in the researchers’ test tubes. Amino acids are essential to life. But the ones in the 1952 experiment did not come to life. Building-block compounds have been shown to result from many natural processes; they even float in huge clouds in space. But no test has given any indication of how they begin to live - or how, in early tentative forms, they could have resisted being frozen or fried by Earth’s harsh prehistoric conditions.

Some researchers have backed the hypothesis that an unknown primordial “soup” of naturally occurring chemicals was able to self-organize and become animate through a natural mechanism that no longer exists. Some advance the “RNA first” idea, which holds that RNA formed and lived on its own before DNA - but that doesn’t explain where the RNA came from. Others suppose life began around hot deep-sea vents, where very high temperatures and pressures cause a chemical maelstrom. Still others have proposed that some as-yet-unknown natural law causes complexity - and that when this natural law is discovered, the origin of life will become imaginable.

Did God or some other higher being create life? Did it begin on another world, to be transported later to ours? Until such time as a wholly natural origin of life is found, these questions have power. We’re improbable, we’re here, and we have no idea why. Or how.



How can observation affect the outcome of an experiment?
Paging Captain Obvious: To perform a legitimate experiment, scientists must observe the results of a system in motion without influencing those results. Turns out that’s harder than it sounds. In 1927, German physicist Werner Heisenberg discovered that in the Wonderland-like subatomic realm, it is impossible to measure a particle’s position and momentum precisely at the same time. “In an attempt to observe an electron or other subatomic particle using light, very short wavelengths of light are required,” says David Cassidy, a science historian and Heisenberg expert at Hofstra University. “But when that light hits the electron, it knocks it all over the place like a billiard ball.” This can become a serious issue when you’re working with the kind of focused, high-intensity beams found in, say, particle accelerators. “The more precise the momentum of the beam particles,” Cassidy says, “the more difficult it becomes to focus the beam.”
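Heisenberg’s insight has a precise quantitative form, the uncertainty relation (standard physics, not quoted in the article):

\[ \Delta x \, \Delta p \ \geq\ \frac{\hbar}{2}, \]

where \( \Delta x \) and \( \Delta p \) are the uncertainties in a particle’s position and momentum and \( \hbar \) is the reduced Planck constant. Pinning the position down more tightly necessarily loosens the grip on the momentum, and vice versa.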

The real problem, though, is what this so-called observer effect does to reality. Do an experiment to find the fundamental unit of light and you find particles called photons. But change the conditions of the experiment and you get waves. Physicists have no problem with the cognitive dissonance of this “wave-particle duality.” But... so... what’s light made out of, really? The dichotomy raises the mind-boggling prospect that unless we observe an event or thing, it hasn’t really happened, that all possible futures are quantum probability functions waiting for someone to notice them - trees falling unheard in a forest. Maybe this article wasn’t even here until you turned to this page.



How do entangled particles communicate?
One of the zanier notions in the plenty zany world of quantum mechanics is that a pair of subatomic particles can sometimes become “entangled.” This means the fate of one instantly affects the other, no matter how far apart they are. It’s such a bizarre phenomenon that Einstein dissed the idea in the 1930s as “spooky action at a distance,” saying it showed that the developing model of the atomic world needed rethinking.

But it turns out that the universe is spooky after all. In 1997, scientists separated a pair of entangled photons by shooting them through fiber-optic cables to two villages 6 miles apart. Tipping one into a particular quantum state forced the other into the opposite state less than five-trillionths of a second later, or nearly 7 million times faster than light could travel between the two. Of course, according to relativity, nothing travels faster than the speed of light - not even information between particles.
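A quick back-of-the-envelope check of the numbers quoted above (a sketch only; the 6-mile separation and 5-picosecond window are taken from this article, not from the original experimental paper):

```python
# Rough check of the "nearly 7 million times faster than light" figure.
SPEED_OF_LIGHT_M_S = 299_792_458        # meters per second
METERS_PER_MILE = 1_609.34

separation_m = 6 * METERS_PER_MILE      # the two villages, per the article
window_s = 5e-12                        # "less than five-trillionths of a second"

light_travel_s = separation_m / SPEED_OF_LIGHT_M_S
ratio = light_travel_s / window_s

print(f"Light needs about {light_travel_s * 1e6:.1f} microseconds to cover 6 miles")
print(f"That is roughly {ratio / 1e6:.1f} million times longer than the 5 ps window")
```

The ratio comes out at about 6.4 million, in line with the “nearly 7 million” figure in the text.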

Even the best theories to explain how entanglement gets around this problem seem preposterous. One, for example, speculates that signals are shot back through time. Ultimately, the answer is bound to be unnerving: According to a famous doctrine called Bell’s Inequality, for entanglement to square with relativity, either we have no free will or reality is an illusion. Some choice.

Why do placebos work?
Tor Wager makes his living inflicting pain. As a psychologist at Columbia University, he zaps people with brief electric surges in order to study the placebo effect, one of the most mysterious phenomena in modern medicine. In one recent experiment, Wager and a group of colleagues delivered harsh shocks to the wrists of 24 test subjects. Then the researchers rubbed an inert cream on the subjects’ wrists but told them it contained an analgesic. When the scientists delivered the next set of shocks, eight of the subjects reported experiencing significantly less pain.

The idea that an innocuous lotion could ease the agony of an electric shock seems remarkable. Yet placebos can be as powerful as the best modern medicine. Studies show that between 30 and 40 percent of patients report feeling better after taking dummy pills for conditions ranging from depression to high blood pressure to Parkinson’s. Even sham surgery can work marvels. In a recent study, doctors at Houston’s Veterans Affairs Medical Center performed arthroscopic knee surgery on one group of patients with arthritis, scraping and rinsing their knee joints. On another group, the doctors made small cuts in the patients’ knees to mimic the incisions of a real operation and then bandaged them up. The pain relief reported by the two groups was identical. “As far as I know, the placebo effect has never raised the dead,” says Howard Brody, a professor at the University of Texas Medical Branch and author of a book on the subject. “But the vast majority of medical conditions respond to placebo at least to some degree.”

How do placebos have such an effect? Nobody knows. Studies have shown that our brains can release chemicals that mimic the activity of morphine when we’re treated with placebo analgesics. But only lately have researchers begun to pin down the underlying physiological mechanisms. In his groundbreaking electrical-shock experiment, Wager used functional MRI to examine images of the brain activity of his subjects. When a person knew a painful stimulus was imminent, the brain lit up in the prefrontal cortex, the region used for high-level thinking. When the researchers applied the placebo cream, the prefrontal cortex lit up even brighter, suggesting the subject might be anticipating relief. Then, when the shock came, patients showed decreased activity in areas of the brain where many pain-sensitive neurons lead.

One day, this sort of research could point toward new treatments that harness the mind to help the body. Until then, doctors are divided on the ethics of knowingly prescribing placebos. Some think it’s shady to perform mock surgery or offer a patient pills that contain no active ingredients. Yet the best doctors have always employed one form of placebo: Studies show that empathy from an authoritative yet caring physician can be deeply therapeutic. Maybe handing out the occasional sugar pill isn’t such a bad idea.



What is the universe made of?
Astronomers scouring the heavens with powerful telescopes can see objects that are billions of trillions of miles away. These observations have proven essential to piecing together a fairly refined picture of the history and evolution of the cosmos. Nevertheless, a gaping hole remains in our understanding of a basic question: What is the universe made of? For more than 100 years we’ve known about atoms, and over the past century or so we’ve gone further and identified atomic constituents like electrons and quarks, as well as their exotic cousins - neutrinos, muons, and the like. But there is now convincing evidence that these ingredients are a cosmic afterthought. Current data shows that if you weighed everything in existence, these familiar particles would amount to about 5 percent of the total. Most of the universe is composed of other stuff, which, with all of science’s deep insights, we’ve yet to identify.

How do we know this? Well, over the course of many decades, astronomers studied the motion of galaxies and the stars within them, and found that the gravity exerted by this luminous matter was insufficient to account for the way these heavenly bodies moved. Only by positing large amounts of additional matter that doesn’t give off light (visible, x-ray, infrared, or any other kind) and is thus invisible to telescopes, could the data be explained. Through detailed cosmological measurements, scientists also discovered that this so-called dark matter couldn’t be made of the same electrons, protons, and neutrons that make up everything with which we are familiar.

Then, in the late 1990s, two groups of astronomers, one led by Saul Perlmutter of the Lawrence Berkeley National Laboratory, the other by Brian Schmidt of the Australian National University, found something even stranger. Through observation of distant supernovas, these astronomers measured how the expansion rate of the universe has changed over time. Because of gravity’s relentless pull, most everyone expected that the expansion would be slowing. But the data from both groups showed the opposite. The expansion of the universe is speeding up. Something must be pushing outward, and luckily Einstein's general theory of relativity provides a ready-made candidate: A uniform, diffuse energy spread throughout space can act as an antigravity force. Since this energy gives off no light, it’s called dark energy.

Collectively, the observations establish that about 23 percent of the universe is dark matter and about 72 percent is dark energy. Everything else is squeezed into the remaining few percent.

Several experiments are now under way to identify dark matter. Scientists are searching for what they suspect is an exotic species of particle. Some studies are looking for clues by analyzing particles bombarding Earth from space; others, like the Large Hadron Collider, will analyze collisions between extremely fast-moving protons that have the potential to create dark matter in the lab. We are guardedly optimistic that we’ll be able to identify dark matter soon.

By contrast, the question of dark energy is wide open. What is its origin? What determined its quantity? Does the amount stay constant or vary? These are critical questions. Calculations show that if the amount of dark energy had been slightly larger, the universe would have blown apart so quickly that life as we know it could not exist.



What is the purpose of noncoding DNA?
A typical human cell contains more than 6 feet of tightly cornrowed DNA. But only about an inch of that carries the codes needed to make proteins, the day laborers of biology. What’s the other 71 inches for?

It’s junk, Nobelist Sydney Brenner said after it was discovered back in the 1970s. The name stuck, but biologists have known for a while that the junk DNA must contain treasures. If noncoding DNA were just along for the ride, it would rapidly incorporate mutations. But long stretches of noncoding DNA have remained basically the same for many millions of years - they must be doing something.

Now scientists are starting to speculate that proteins, and the regular DNA that creates them, are just the nuts and bolts of the system. “They’re like the parts for a 757 jet sitting on the floor of a factory,” says University of Queensland geneticist John Mattick. The noncoding DNA is likely “the assembly plans and control systems.” Unfortunately, he concludes, because we’ve spent 30 years thinking of it as junk, we’re just now learning how to read it.

Will forests slow global warming - or speed it up?
Everyone knows that forests are good for the environment. Trees grow by removing carbon dioxide - the principal cause of global warming - from the air. And the bigger and more plentiful the trees, the more CO2 they sequester. This makes forests a helpful bulwark against climate change. But despite the best carbon-eating action of our flora, the planet is heating up. This raises the specter of a future in which, paradoxically, forests don’t reduce climate change but - as they are destroyed - make it worse.

We don’t know which way it will go, because we know so little about forests themselves. Scientists estimate that up to 50 percent of all species live in forest canopies - three-dimensional labyrinths largely invisible from the ground - but virtually no one can tell you what lives in any given cubic meter of canopy, at any height, anywhere in the world. We don’t even have names for the most common species of trees in the Amazon.

But scientists can readily foresee the way in which these carbon killers instead become dangerous carbon spewers. As the climate warms, many forests will become drier, putting the trees under stress. Typically, this sets the stage for huge outbreaks of insects, which can strip trees of their leaves, killing large numbers of them. Once dead, trees release their carbon into the air - already roughly 25 percent of the greenhouse gases pouring into the atmosphere come from forests that are burned or cut down. Further, if they no longer exist, forests can’t absorb CO2 anymore, and the bare ground that is exposed heats up faster - forests are like giant swamp coolers for the planet. Will this happen?

Hard to say. If we don’t know which insects are eating the leaves now, we can’t gauge how global warming will affect them or how they in turn might affect forests. “You can’t possibly answer more general questions about forests until you at least know what lives there,” says Margaret Lowman, canopy scientist at New College of Florida. “It’s more than just giving names to things. We need to know what’s common and what’s rare, and what these species are doing, before we can go to the next level, which is to try to see the interaction between forests and Earth’s climate.”



What happens to information in a black hole?
Inside a black hole, gravity is so intense that neither matter nor energy can escape. But in 1975, Cambridge physicist Stephen Hawking said that something does escape: random particles now known as “Hawking radiation.” So if black holes eat organized matter - chock-full of information - and then spit out random noise, where does the information go?

Hawking said it stays locked up inside until the black hole eventually evaporates, destroying the information in the process. That creates a paradox, because the rules of physics say information, like matter and energy, can’t be destroyed.

Hawking was confident. He convinced his super-genius counterpart at Caltech, physicist Kip Thorne, that he was right - but Thorne’s colleague John Preskill remained skeptical. So they made a bet: Hawking and Thorne said the singularity at the heart of a black hole destroyed information; Preskill said “nuh-uh.” Then, in 2004, Hawking reversed his position and decided that things that fall into a singularity aren’t lost; their information does leak out, though no one, except maybe Hawking himself, can explain why or how.

He presented Preskill with a baseball encyclopedia from which, presumably, information can be retrieved at will. Preskill accepted only grudgingly. “Even if you’re Stephen Hawking, it’s possible to be wrong twice,” he says.



What causes ice ages?
Scientists know that small-scale ice ages occur every 20,000 to 40,000 years and that massive ones happen every 100,000 years or so. They just don’t know why. The current working theory - first proposed in 1920 by Serbian engineer Milutin Milankovitch - is that irregularities in Earth’s orbit change how much solar energy it absorbs, resulting in sudden (well, geologically speaking) cooling. While this neatly fits the timing of short-term events, there’s still a big problem. Over the past few decades, studies have shown that orbital fluctuations affect solar energy by 1 percent or less - far too little to produce massive climate shifts on their own. “The mystery is, what is the amplification factor?” says University of Michigan geologist and climatologist Henry Pollack. “What takes a small amount of solar energy change and produces a large amount of glaciation?”

Studies of ice and seabed cores reveal that temperature rise and fall is heavily correlated with changes in greenhouse-gas concentrations. But it’s a chicken-and-egg problem. Are CO2 rises and falls a cause of climate change or an effect? If they are a cause, what initiates the change? Figuring this out could tell us a great deal about the current global warming problem and how it might be solved. But as Matthew Saltzman, a geologist at Ohio State puts it, “We need to know why greenhouse gases fluctuated in prehuman times, and we just don’t.”
- John Hockenberry, WIRED contributing editor



How does the brain calculate movement?
All of science, it seems, wants to know how brains give animals complex motor skills. Robotics, physics, neurophysiology, and medicine are just a few of the disciplines studying the topic. The paradox is that brains - even large human brains - are notoriously slow by processing standards: Set your hand on a hot plate and it takes full milliseconds to feel the burn. So how does the same gooey substance simultaneously acquire visual data, calculate positional information, and gauge trajectory to let a lizard’s tongue snatch a fly, a dog’s mouth catch a Frisbee, or a hand catch a falling glass? “With the thousands of muscles in the body, the motor cortex clearly isn’t ‘thinking’ in any sense about movement,” says UC San Diego neuroscientist Patricia Churchland. According to Stanford University’s Krishna Shenoy, the brain seems to create an internal model of the physical world, then, like some super-sophisticated neural joystick, traces intended movements onto this model. “But it’s all in a code that science has yet to crack,” he says. Whatever that code is, it’s not about size. “Even a cat’s brain can modify the most complicated motions while executing them.”



Why do the poles reverse?
Almost 800,000 years ago, compasses would have pointed south. A little further back, they would have pointed north. Evidence for such reversals comes from lava flows and cracks in the ocean floor, places where newly formed rock makes a record of the magnetic polarity.

We know that as Earth spins, the liquid metal in its molten core churns, generating an electromagnetic field. We also know that shifts in the movement of the core can alter the polarity of that field and that it takes about 7,000 years for the orientation to flip-flop once the process of reversal begins - something that happens on average two or three times every million years. But no one knows how it works. Some scientists believe the poles migrate slowly from one end to the other; some theorize that the magnetic field shuts down and then reemerges with opposite polarity.

As for what triggers the event, experts have suggested that a huge impact - say, a giant meteor - could create a disturbance in the core. But research by Gary Glatzmaier, a planetary science professor at UC Santa Cruz, shows that a violent catalyst isn’t needed. So why does pole reversal occur? “That’s like asking, why do hurricanes start?” he says. “Well, they’re always trying to, and sometimes the conditions are just right.”

How does the brain produce consciousness?
That slab of meat in your skull - a 3-pound walnut of wetware - somehow puts the you in you. Nobody really knows how. Philosophers since Plato have pondered the issue. And probing the relationship between mind and body was the central goal of psychology until behaviorists closed the door on mind in the early 20th century and focused on observable actions. But only recently have scientists tried to tackle consciousness, spurred by new tools like functional MRI and PET scans that can augment traditional clinical research by showing brain activity.

Already, however, these researchers find themselves haggling over familiar questions. Is consciousness merely wakefulness? No, we’re conscious when we dream. Is it our sense of personal identity? Yes, but surely it’s also the stream of words and images that runs through what William James called the “extended present,” the immediate workspace of our minds. It’s perception, but it’s also reflection - summoning up visual and verbal constructions, imaginary or real. It’s simulation, mentally walking ourselves through situations before we face them, learning and practicing, hoping to avert pratfalls.

No surprise, then, given this confusion, that scientific theories on consciousness are all over the map. Antonio Damasio, a neurologist and neuroscientist at the University of Southern California who studies brain-damaged patients, speculates that self-awareness evolved in humans as a regulatory mechanism, a way for the brain to understand what is going on with the body. He calls “the coming of the sense of self into the world of the mental” a “turning point in the long history of life.” Caltech’s Christof Koch, who studies vision as the starting point for mind, believes that people have specific “consciousness neurons.” And Bernard Baars of the Neurosciences Institute in San Diego suggests that consciousness is a controlling gateway to unconscious mechanisms such as working memory, word meanings, visual memory, and learning.

Some philosophers still argue that consciousness is too subjective to explain, or that it is the irreducible result of matter organized in a specific way. That philosophic black-boxing is probably more nostalgic than scientific, a clinging to the idea of a spirit or soul. Without that, after all, we’re just organisms - more complex, but no less predictable, than dung beetles. But scientists live to reduce the seemingly irreducible, and sentimentality is off-limits in the lab. Understanding consciousness means finding the biophysical mechanisms that generate it. Somewhere behind your eyes, that meat becomes the mind.



Why is fundamental physics so messy?
When the job description calls for reverse-engineering the universe, the pool of successful applicants will naturally include enough self-impressed overachievers to make second-degree ego burn a hazard of the trade. But even the leading researchers in theoretical particle physics, the most headstrong of the scientific elite, are humbled by their failure to figure out why the cosmos is such a mathematically elegant mess.

The equations themselves are lovely, describing how a baseball arcs parabolically between earth and sky or how an electron jumps around a nucleus or how a magnet pulls a pin. The ugliness is in the details. Why does the top quark weigh roughly 40 times as much as the bottom quark and, even worse, thousands of times more than the up quark and down quark combined? Maddeningly, the proton weighs almost, but not exactly, the same as its counterpart, the neutron. And wasn’t the electron enough? Did we really need its two fat cousins, the muon and the tau?

It’s as though some software engineer crafted a beautiful, bugless operating system - the laws of physics - and then fed it with random data, the output from a lava lamp, or moths bashing at a window screen. Garbage in, garbage out, generating the weird, starry heap of a universe we call home.

Optimists hope the randomness is actually pseudo-random - complexity in disguise, with The Algorithm at the core of everything, churning out the details, demanding that things be what they be.

The bet is that this codex lies tangled somewhere inside superstring theory. Deep within the quarks, face-to-face with the universal machine language, are tiny snippets of something - no one really knows what - called strings and branes. They wiggle around in their 10 or so dimensions and conjure up the universe, this universe, with a spec sheet about as symmetrical as a bingo card.

Superstring theory turns out to be more complex than the universe it is supposed to simplify. Research suggests there may be 10^500 universes... or 10^500 regions of this universe, each ruled by different laws. The truths that Newton, Einstein, and dozens of lesser lights have uncovered would be no more fundamental than the municipal code of Nairobi, Kenya, or Terre Haute, Indiana. Physicists would just be geographers of some accidental terrain.

Things might look brighter next year, when the Large Hadron Collider - the biggest scientific project ever - should be running full blast, using superconducting magnets to smash matter hard enough to break through the floor of reality. Physicists hope that down in the cellar they’ll find the Higgs boson - skulking in the dark like a centipede, furtively giving the other particles their variety of masses.

Or maybe they’ll just find more junk. If so, the search will probably be over for now, placed on hold for the next civilization with the temerity to believe that people, pawns in the ultimate chess game, are smart enough to figure out the rules.

How does human language evolve?
Lots of animals make noise; much of it even conveys information. But for sheer complexity, for developed syntax and grammar, and for the ability to articulate abstract concepts, you can’t beat human speech. MIT linguist Noam Chomsky and Harvard experimental psychologist Steven Pinker say it’s genetic. Pinker theorizes that language emerged about 200,000 years ago, when early humans who were efficient communicators were more likely to pass on their genes. (Less-than-efficient communicators were more likely to scream incoherently - instead of imparting an escape plan - before being devoured by a saber-toothed tiger.) A little more evidence: People with particular genetic defects have specific difficulties with speech and grammar.

Other scientists argue that spoken words are actually an outgrowth of other human skills, such as planning, memory, and logic. “There is no ‘language gene,’” says Luc Steels, a computer scientist at the Free University of Brussels in Belgium. “Language was a cultural breakthrough, like writing.” Steels built robots with a set of general intelligence traits but without a language module in their software, and they developed grammar and syntax systems similar to those of human language.

Blame neuroscientists for the controversy. The parts of the brain thought to be responsible for language are as well-understood as the rest of the brain, which is to say: not so much.

Why can’t we predict the weather?
A few years ago, weather forecasts were totally unreliable beyond a couple of days; today better computer models make them accurate as far as a week out. That’s fine for figuring out how to pack for a business trip or whether you need to rent a big tent for the wedding reception. The trouble starts when you want to build a computer model to predict the weather over decades or centuries. In 1961, a meteorologist named Edward Lorenz was running a computerized weather simulation and decided to round a few decimal places off one of the starting values. The tiny tweak completely changed weather patterns. This became known as the butterfly effect: A butterfly flapping its wings in Brazil sets off a tornado in Texas. Lorenz’s shortcut helped launch chaos theory and sparked an obsession among meteorologists with feeding as-perfect-as-possible data into their models in an attempt to lengthen their forecast window.
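Lorenz’s accident is easy to reproduce. The sketch below (an illustration, not his original weather model) runs his later, simplified three-variable system twice, with one starting value rounded off in the third decimal place, and watches the two runs drift apart:

```python
# Butterfly effect demo: two runs of the Lorenz system whose starting points
# differ only by a small rounding, integrated with a simple Euler step.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.00012, 1.0, 1.0)   # "full precision" starting point
b = (1.0, 1.0, 1.0)       # the same point, rounded

for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step:4d}: the two runs differ in x by {abs(a[0] - b[0]):.4f}")
```

After a few thousand steps the two trajectories bear no resemblance to each other, which is exactly the behavior that wrecked Lorenz’s shortcut and that limits any long-range forecast built on imperfect measurements.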

But even refining precision doesn’t get us to long-term prediction. For that, climatologists need to understand boundary conditions, like the interactions between the atmosphere and the oceans. The goal, says Louis Uccellini, director of the National Centers for Environmental Prediction, is to model Earth as a single climate system. Then we can figure out what’ll happen to it next.

Why don’t we understand turbulence?
An airplane’s sudden loss of lift, liquid fuel igniting inside a rocket engine, blood clotting in an artificial heart valve - turbulence can be deadly. When a liquid or gas moves smoothly, it’s easy to go with the flow. But change certain conditions - speed, viscosity, surrounding space - and the orderly current dissolves into whirling chaos. If we could model the physics of turbulent flow in software, we could use the model’s output to design safer, more-energy-efficient machines.
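The “certain conditions” that tip a smooth flow into chaos are usually rolled into a single dimensionless quantity, the Reynolds number, the ratio of inertial to viscous forces (a textbook criterion, not something this article cites; the roughly 2,300 threshold below is the usual figure for flow in a round pipe):

```python
# Reynolds number: a rough predictor of whether pipe flow stays smooth
# (laminar) or breaks down into turbulence.

def reynolds_number(speed_m_s, length_m, density_kg_m3, viscosity_pa_s):
    """Re = rho * v * L / mu: inertial forces over viscous forces."""
    return density_kg_m3 * speed_m_s * length_m / viscosity_pa_s

# Room-temperature water in a 2 cm pipe, at a trickle and at a brisk flow.
for speed in (0.05, 1.0):                             # meters per second
    re = reynolds_number(speed, 0.02, 1000.0, 1.0e-3)
    regime = "laminar" if re < 2300 else "likely turbulent"
    print(f"v = {speed} m/s  ->  Re = {re:,.0f}  ({regime})")
```

Raise the speed, widen the pipe, or thin the fluid and the number climbs past the threshold - that is the transition zone discussed below.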

The trouble is complexity. When a stream of water or air goes turbulent, groups of molecules form vortices of widely varying sizes that interact in seemingly random ways. To determine the outcome, we’d have to measure the initial conditions to an impractical degree of precision. And in any case, tracking a zillion particles is beyond the reach of any conceivable computer.

If we can’t predict how a given turbulent system will behave, at least we can simplify it enough to zero in on statistical likelihoods. The key is the transition zone: the precise spot where smooth flow breaks down. Here, chaos theory describes the proliferation of whorls, while the science of cellular automata, which imposes a grid over reality, reduces complex interactions to a limited number of simple equations. These mathematical tricks don’t bring turbulence to heel, but they do get engineers close enough to make reasonably sure your plane touches down on time... and in one piece.

Is the universe actually made of information?
Humans have talked about atoms since the time of the ancients, and ever-smaller fundamental particles of matter followed. But no one even conceived of bits until the middle of the 20th century. The bit is a fundamental particle, too, but of different stuff altogether: information. It is not just tiny, it is abstract - a flip-flop, a yes-or-no. Now that scientists are finally starting to understand information, they wonder whether it’s more fundamental than matter itself. Perhaps the bit is the irreducible kernel of existence; if so, we have entered the information age in more ways than one.

The quantum pioneer John Archibald Wheeler, perhaps the last surviving collaborator of both Albert Einstein and Niels Bohr, poses this conundrum in oracular monosyllables: “It from bit.” For Wheeler, it is both an unanswered question and a working hypothesis, the idea that information gives rise, as he writes, to “every it - every particle, every field of force, even the spacetime continuum itself.” This is another way of fathoming the role of the observer, the quantum discovery that the outcome of an experiment is affected, or even determined, when it is observed. “What we call reality,” Wheeler writes coyly, “arises in the last analysis from the posing of yes-no questions.” He adds, “All things physical are information-theoretic in origin, and this is a participatory universe.”

Earlier generations would not have been able to imagine information as so... meaty. How could this abstract quality be substantial enough - enough of a thing - to be the substrate of all existence? Its newly powerful status began to emerge in 1948, when Claude Shannon at Bell Labs invented information theory. His scientific framework introduced the bit, defined concepts like signal and noise, and pointed the way to modems and compact discs, cell phones and cyberspace, Moore’s law, Metcalfe’s law, and a world of silicon valleys and alleys.

Now the whole universe is seen as a computer - a cosmic processor of information. When photons and electrons and other particles interact, what are they really doing? Exchanging bits, transmitting quantum states. Every burning star, every silent nebula, every particle leaving its ghostly trace in a cloud chamber is an information processor. The universe computes its own destiny.

How much does it compute? How fast? How big is its total information capacity, its memory space? What is the link between energy and information - the energy cost of flipping a bit?
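One of those questions already has a partial answer. Landauer’s principle (a standard result in the thermodynamics of computation, not cited here) sets a floor on the energy that must be dissipated to irreversibly erase a single bit:

\[ E_{\min} = k_B T \ln 2 \ \approx\ 2.9 \times 10^{-21} \ \text{joules at room temperature}, \]

a bound that today’s chips overshoot by many orders of magnitude - which is why the energy cost of information is a physical question, not a metaphorical one.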

These are hard questions, but they are not mystical or metaphorical. Physicists and quantum-information theorists are using the bit to look anew at the mysteries of thermodynamic entropy and at those notorious information swallowers, black holes. They’re doing the math and producing tentative answers. To the small questions, that is.

For Wheeler, the big question of which comes first, the material universe or information, is a way of posing an even bigger question: “How come existence?” How does something arise from nothing? And that, it’s safe to say, is a question science cannot answer.


Why do some diseases turn into pandemics?
A pandemic - a transnational outbreak of disease - is really just a pathogen on a hot streak. After all, germs want what we all want, evolution-wise: to spread their genes. Success in the germ world means infecting a whole lot of people, reproducing, then infecting a whole lot more. The efficiency with which a microbe pulls that off depends on how the bug works and how the target - us - works. HIV, for example, loves a promiscuous-but-prudish population; human beings like to have sex but don’t like to talk about condoms. The Ebola virus, on the other hand, hasn’t found victims who exchange fluids with enough other people before dying (horribly). So changes in culture like jet airplane travel can make a population more vulnerable to a previously contained disease. And changes in a germ - say, if avian influenza H5N1 acquires the right genes from the human version - can be like spinach to Popeye. But no one knows how to predict when either of those things might happen. So don’t forget to wash your hands. A lot.
- Elizabeth Svoboda

Can mathematicians prove the Riemann hypothesis?
In the early 1900s, German mathematician David Hilbert said that if he awakened after 1,000 years of sleep, the first question he’d ask would be: Has the Riemann hypothesis been proven?

It’s been only 100 years, but the answer so far is no. Put forward by Bernhard Riemann in 1859, the hypothesis pins down where the nontrivial zeroes of something called the Riemann zeta function fall: all along a single “critical line.” That, in turn, is tied to the intervals between prime numbers.

Prime numbers (numbers that can be divided only by 1 and themselves: 2, 3, 5, et cetera) are the building blocks of mathematics, because all other numbers can be arrived at by multiplying them together (e.g., 150 = 2 x 3 x 5 x 5). Understanding the primes sheds light on the entire landscape of numbers, and the greatest mystery concerning primes is their distribution. Sometimes primes are neighbors (342,047 and 342,049). Other times a prime is followed by a desert of nonprimes before the next one pops up (396,733 and 396,833).
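A short script makes the neighbors-versus-deserts point concrete by finding the next prime after each of the starting numbers mentioned above (plain trial division, which is perfectly adequate at six digits):

```python
# Find the gap to the next prime after each starting point from the text.

def is_prime(n):
    """Trial division, fine for six-digit numbers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def next_prime(n):
    """Smallest prime strictly greater than n."""
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

for start in (342_047, 396_733):
    nxt = next_prime(start)
    print(f"next prime after {start:,} is {nxt:,} (a gap of {nxt - start})")
```

The script says nothing about why the gaps behave as they do; that is where the zeta function comes in.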

Making sense of this bizarre arrangement would offer a base from which to solve numerous other long-standing math problems and could affect related fields, like quantum physics. Until they know whether it’s true, though, mathematicians can’t use Riemann. Princeton mathematician Peter Sarnak put it this way: “Right now, when we tackle problems without knowing the truth of the Riemann hypothesis, it’s as if we have a screwdriver. But when we have it, it’ll be more like a bulldozer.” Which is why the Riemann hypothesis has been named one of the Clay Millennium prize problems: Whoever proves (or disproves) it gets $1 million.

Why do we die when we do?
When asked why things die, physicists don’t hesitate: It’s the second law of thermodynamics. Everything, be it mineral, plant, or animal, a Lexus or a mitral valve or a protein in a cell wall, eventually breaks down. What that looks like in humans - what exactly it is that makes us age - is a question for biologists. It’s DNA damage by free radicals, maybe, or shrinkage of the caps on chromosomes. Telomeres, as they’re called, get smaller with each cell division. When they hit a certain length: apoptosis, or cell death.

But for the best explanation of the when of our mortality, you have to ask the ecologists. They have a rough way of calculating life span. Basically, the larger the species, the slower its energy-delivery systems (all that internal tubing, all that complicated traffic); the lower the metabolic rate, the longer the life. Animals can live fast or burn slow. “If you’ve ever picked up a little mouse, it’s effectively vibrating, its heart is beating so fast,” says Brian Enquist, an ecologist at the University of Arizona. “A blue whale’s heart is like a slow metronome or the ringing of a church bell, a very slow bong... bong... bong.”

Yet both get roughly the same number of beats - 100 million and change, spread over two years for the mouse and roughly 80 years for the whale. “There’s this beautiful invariant: All living creatures have about the same amount of energetic life,” Enquist says. Yet while many animals outmass us humans, few outlive us. Why the long life for us lightweights? Like the hide of a rhinoceros or the claws of a tiger, human cleverness makes us tough to kill. That means random longevity-enhancing genes have a pretty good shot at evading natural selection. A bird that gets eaten in its second month of life never passes on whatever fluke mutation might have given it - and its progeny - an extra year or two.
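The “same amount of energetic life” claim is easy to rough out. The heart rates below are assumptions chosen for illustration (a small rodent at several hundred beats per minute, a blue whale at well under ten), not figures from the article:

```python
# Order-of-magnitude estimate of lifetime heartbeats for a mouse and a blue
# whale, using assumed resting heart rates and the lifespans quoted above.

MINUTES_PER_YEAR = 60 * 24 * 365

def lifetime_beats(beats_per_minute, lifespan_years):
    return beats_per_minute * MINUTES_PER_YEAR * lifespan_years

mouse_beats = lifetime_beats(600, 2)    # assumed ~600 bpm over a ~2-year life
whale_beats = lifetime_beats(8, 80)     # assumed ~8 bpm over an ~80-year life

print(f"mouse:      ~{mouse_beats / 1e6:.0f} million beats")
print(f"blue whale: ~{whale_beats / 1e6:.0f} million beats")
```

With those assumed rates the two totals land within a factor of two of each other despite a millionfold difference in body mass - the invariance Enquist is describing - though the exact figures shift with whichever resting rates you plug in.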

As for the ecologists’ neat mathematical equation, “primates are a little different,” Enquist concedes. “For the number of heartbeats we have in our lives, we live a little longer than we should, and it’s a big mystery why that is.” He speculates that the difference for us outliers will be explained by brain size - or, rather, by how much time and energy humans spend growing their brains relative to the rest of their bodies. Why lavishing that extra energy on brainmaking translates into disproportionately long lives, Enquist isn’t sure (and at 37, he has only about 36 more years to figure it out). Luckily, the same biological aberration that allows people to contemplate their own mortality is responsible, albeit indirectly, for delaying it.

What causes gravity?
Isaac Newton first figured out the fundamental nature of gravity in the late 1600s. By unraveling the mysteries of planetary movement and Earth’s pull on its inhabitants, he laid the foundations of modern physics. But more than three centuries later, that’s still all we have: an understanding of the effect, with almost no grasp of the cause. Is gravity carried by an elementary particle? Is it some fundamental feature of spacetime we don’t understand? Why can’t gravity be reconciled with the better-understood quantum forces? All these questions remain unanswered. Many scientists think gravity must be carried by a massless particle, and have even dubbed it the graviton. But experiments to detect this entity (using a super-collider, for example) can’t be performed with current technology. “To generate the energy required to investigate a gravity particle, we believe, would produce a black hole,” says Harvard physicist Lisa Randall. “Space itself just breaks down.” Right now, mathematics is the best investigative tool for getting gravity to square with subatomic forces like electromagnetism. But making the math work requires dealing with exotic string theory notions like invisible 10-dimensional space. “We’ve always understood that gravity was different,” Randall says. “If we figure out why in the next 30 years, there will be another big, new question. I guarantee it.”


Why can’t we regrow body parts?
Slice through your finger with a kitchen knife and it’s bye-bye pinkie. But lop a leg off a salamander and it’ll grow a new one with little more fuss than we expend on a broken nail. Scientists looking to reverse tissue damage caused by disease, injury, or aging want to know how the agile amphibians do it - and why we can’t.

When salamanders are wounded, skin, bone, muscle, and blood vessels at the site revert to their undifferentiated states, forming a spongy mass called a blastema. It’s as if the cells go back in time and then retrace their steps to assemble a new organ or limb.

We seem to have this same basic program written in our genes: As embryos, we grew arms, legs, heart, lungs, and so on with no problem, and even as adults, one type of cell in our nervous system can dedifferentiate to repair damage. Others in our liver show similar flexibility. But for the most part, our regenerative pathway appears to be roadblocked. The reason may be that the rapid cell division required to sprout a new limb looks to the body a lot like the unchecked growth of cancer. Our longevity makes us vulnerable to accumulated DNA mutations, so we’ve evolved molecular brakes to keep tumors at bay. In order to unlock our regenerative capabilities, scientists will have to figure out how to override the stop signals without sparking a malignant rampage.
Why do we still have big questions?
Information is expanding 10 times faster than any product on this planet - manufactured or natural. According to Hal Varian, an economist at UC Berkeley and a consultant to Google, worldwide information is increasing at 66 percent per year - approaching the rate of Moore’s law - while the most prolific manufactured stuff - paper, let’s say, or steel - grows by at most about 7 percent annually. By this rough metric, knowledge is growing exponentially. Indeed, the current pace of discovery is accelerating so rapidly that it seems as if we’re headed for that rapture of enlightenment known as the Singularity.
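Those growth rates imply very different doubling times (simple compound-growth arithmetic applied to the percentages quoted above):

\[ t_{\text{double}} = \frac{\ln 2}{\ln(1+r)}: \qquad \frac{\ln 2}{\ln 1.66} \approx 1.4 \ \text{years for information}, \qquad \frac{\ln 2}{\ln 1.07} \approx 10 \ \text{years for paper or steel} \]

- a gap of roughly an order of magnitude, the “10 times faster” of the opening sentence, and compounding fast enough to make the Singularity talk feel less far-fetched.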

In fact, we may be nearly there. A decade ago, author John Horgan interviewed prestigious scientists in many fields and concluded in his book The End of Science that all the big questions had been answered. The world of science has been roughly mapped out - structure of atoms, nature of light, theories of relativity and evolution, and so on - and all that remains now is to color in the details.

So why do we still have so many unanswered questions? Take the current state of physics: We don’t know what 96 percent of the universe is made of. We call it “dark matter” and “dark energy,” euphemisms for our ignorance.

Yet it is also clear that we know far more about the universe than we did a century ago, and we have put this understanding to practical use - in consumer goods like GPS receivers and iPods, in medical devices like MRI scanners, and in engineered materials like photovoltaic cells and carbon nanotubes. Our steady and beneficial progress in knowledge comes from steady and beneficial progress in tools and technology. Telescopes, microscopes, fluoroscopes, and oscilloscopes allow us to see in new ways and to know more about the universe.

The paradox of science is that every answer breeds at least two new questions. More answers mean even more questions, expanding not only what we know but also what we don’t know. Every new tool for looking farther or deeper or smaller allows us to spy into our ignorance. Future technologies such as artificial intelligence, controlled fusion, and quantum computing (to name a few on the near horizon) will change the world - that means the biggest questions have yet to be asked.






Wednesday, August 1, 2007

How can we see distant stars in a young universe?

If the universe is young and it takes millions of years for light to get to us from many stars, how can we see them? Did God create light in transit? Was the speed of light faster in the past? Does this have anything to do with the big bang?
Some stars are millions of light-years away. Since a light-year is the distance traveled by light in one year, does this mean that the universe is very old?

Despite all the biblical and scientific evidence for a young earth/universe, this has long been a problem. However, any scientific understanding of origins will always have opportunities for research—problems that need to be solved. We can never have complete knowledge and so there will always be things to learn.

One explanation used in the past involved light travelling along Riemannian surfaces (a mathematical description of curved space). Such a model cannot be valid because if space were sufficiently curved to explain light travel, then our universe would be impossibly dense and small, which observations contradict.


Created light?

Perhaps the most commonly used explanation is that God created light ‘on its way,’ so that Adam could see the stars immediately without having to wait years for the light from even the closest ones to reach the earth. While we should not limit the power of God, this has some rather immense difficulties.

It would mean that whenever we look at the behavior of a very distant object, what we see happening never happened at all. For instance, say we see an object a million light-years away which appears to be rotating; that is, the light we receive in our telescopes carries this information ‘recording’ this behavior. However, according to this explanation, the light we are now receiving did not come from the star, but was created ‘en route,’ so to speak.

This would mean that, for a 10,000-year-old universe, anything we see happening beyond about 10,000 light-years away is actually part of a gigantic picture show of things that have not actually happened, showing us objects which may not even exist.

To explain this problem further, consider an exploding star (supernova) at, say, an accurately measured 100,000 light-years away. Remember we are using this explanation in a 10,000-year-old universe. As the astronomer on earth watches this exploding star, he is not just receiving a beam of light. If that were all, then it would be no problem at all to say that God could have created a whole chain of photons (light particles/waves) already on their way.

However, what the astronomer receives is also a particular, very specific pattern of variation within the light, showing him/her the changes that one would expect to accompany such an explosion—a predictable sequence of events involving neutrinos, visible light, X-rays and gamma-rays. The light carries information recording an apparently real event. The astronomer is perfectly justified in interpreting this ‘message’ as representing an actual reality—that there really was such an object, which exploded according to the laws of physics, brightened, emitted X-rays, dimmed, and so on, all in accord with those same physical laws.

Everything he sees is consistent with this, including the spectral patterns in the light from the star giving us a ‘chemical signature’ of the elements contained in it. Yet the ‘light created en route’ explanation means that this recorded message of events, transmitted through space, had to be contained within the light beam from the moment of its creation, or planted into the light beam at a later date, without ever having originated from that distant point. (If it had started from the star—assuming that there really was such a star—it would still be 90,000 light years away from earth.)

To create such a detailed series of signals in light beams reaching earth, signals which seem to have come from a series of real events but in fact did not, has no conceivable purpose. Worse, it is like saying that God created fossils in rocks to fool us, or even test our faith, and that they don’t represent anything real (a real animal or plant that lived and died in the past). This would be a strange deception.

Did light always travel at the same speed?

An obvious solution would be a higher speed of light in the past, allowing the light to cover the same distance more quickly. This seemed at first glance a too-convenient ad hoc explanation. Then some years ago, Australian Barry Setterfield raised the possibility to a high profile by showing that there seemed to be a decreasing trend in the historical observations of the speed of light (c) over the past 300 years or so. Setterfield (and his later co-author Trevor Norman) produced much evidence in favor of this theory.[1] They believed that it would have affected radiometric dating results, and even have caused the red-shifting of light from distant galaxies, although this idea was later overturned, and other modifications were also made.

Much debate has raged to and fro among equally capable people within creationist circles about whether the statistical evidence really supports c decay (‘cdk’) or not.

The biggest difficulty, however, is with certain physical consequences of the theory. If c had declined in the way Setterfield proposed, these consequences should still be discernible in the light from distant galaxies, but they are apparently not. In short, none of the theory’s defenders have been able to answer all the questions raised.

A new creationist cosmology

Nevertheless, the c-decay theory stimulated much thinking about the issues. Creationist physicist Dr Russell Humphreys says that he spent a year on and off trying to get the declining c theory to work, but without success. However, in the process, he was inspired to develop a new creationist cosmology which appears to solve the problem of the apparent conflict with the Bible’s clear, authoritative teaching of a recent creation.

This new cosmology is proposed as a creationist alternative to the big bang theory. It passed peer review, by qualifying reviewers, for the 1994 Pittsburgh International Conference on Creationism.2 Young-earth creationists have been cautious about the model,3 which is not surprising with such an apparently radical departure from orthodoxy, but Humphreys has addressed the problems raised.4 Believers in an old universe and the big bang have vigorously opposed the new cosmology and claim to have found flaws in it.5 However, Humphreys has been able to defend his model, as well as develop it further.6 The debate will no doubt continue.

This sort of development, in which one creationist theory, c-decay, is overtaken by another, is a healthy aspect of science. The basic biblical framework is non-negotiable, as opposed to the changing views and models of fallible people seeking to understand the data within that framework (evolutionists also often change their ideas on exactly how things have made themselves, but never whether they did).

A clue

Let us briefly give a hint as to how the new cosmology seems to solve the starlight problem, before explaining some preliminary items in a little more detail. Consider that the time taken for something to travel a given distance is the distance divided by the speed at which it is traveling. That is:

Time = Distance / Speed

When this is applied to light from distant stars, the time works out to be millions of years. Some have sought to challenge the distances, but that is a very unlikely answer.7

Astronomers use many different methods to measure the distances, and no informed creationist astronomer would claim that any errors would be so vast that billions of light-years could be reduced to thousands, for example. There is good evidence that our own Milky Way galaxy is 100,000 light years across!
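
As a rough check on the equation above, the short Python sketch below uses standard values for the light-year and the speed of light; the 100,000 light-year distance is the Milky Way diameter just mentioned.

```python
# Rough illustration of Time = Distance / Speed for starlight.
# Constants are standard values; the 100,000 light-year distance is
# the Milky Way diameter mentioned above.

LIGHT_YEAR_M = 9.4607e15    # metres in one light-year
C_M_PER_S = 2.998e8         # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

distance_m = 100_000 * LIGHT_YEAR_M
travel_time_years = distance_m / C_M_PER_S / SECONDS_PER_YEAR

print(f"Light crossing 100,000 light-years takes about {travel_time_years:,.0f} years")
# This necessarily comes out near 100,000 years, since a light-year is
# defined as the distance light travels in one year.
```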

If the speed of light (c) has not changed, and the distances are real, then the only remaining quantity in the equation is time itself. In fact, Einstein’s relativity theories have been telling the world for decades that time is not a constant.

Two things are believed (with experimental support) to distort time in relativity theory—one is speed and the other is gravity. Einstein’s general theory of relativity, the best theory of gravity we have at present, indicates that gravity distorts time.

This effect has been measured experimentally, many times. Clocks at the top of tall buildings, where gravity is slightly less, run faster than those at the bottom, just as predicted by the equations of general relativity (GR).8
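
To give a feel for the size of this effect, here is a minimal sketch using the standard weak-field approximation, in which the fractional difference in clock rates is about g*h/c^2; the 400-metre height is an assumed, illustrative figure rather than one taken from the experiments referred to above.

```python
# Weak-field estimate of gravitational time dilation between the top
# and bottom of a tall building: fractional rate difference ~ g*h/c^2.
# The 400 m height is an assumed, illustrative value.

g = 9.81                    # surface gravity, m/s^2
h = 400.0                   # height difference, m (assumed)
c = 2.998e8                 # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

fractional_rate = g * h / c**2
drift_per_year_us = fractional_rate * SECONDS_PER_YEAR * 1e6

print(f"Fractional rate difference: {fractional_rate:.2e}")
print(f"The upper clock gains about {drift_per_year_us:.1f} microseconds per year")
```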

When a concentration of matter is large or dense enough, the gravitational distortion can be so immense that even light cannot escape.9 The equations of GR show that at the invisible boundary surrounding such a concentration of matter (called the event horizon, where light rays trying to escape the enormous pull of gravity bend back on themselves), time literally stands still.
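
To put numbers on ‘time standing still,’ the sketch below uses two textbook results from GR: the Schwarzschild radius r_s = 2GM/c^2 marking the event horizon, and the time-dilation factor sqrt(1 - r_s/r) for a stationary clock at radius r, which drops toward zero as the clock approaches the horizon. One solar mass is used purely as an illustrative example.

```python
import math

# Schwarzschild radius r_s = 2GM/c^2 and the GR time-dilation factor
# sqrt(1 - r_s/r) for a stationary clock at radius r.  As r approaches
# r_s, the factor approaches zero: a distant observer sees the clock
# effectively stop.  One solar mass is used purely as an example.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # mass of the sun, kg

r_s = 2 * G * M_SUN / c**2
print(f"Schwarzschild radius for one solar mass: {r_s / 1000:.2f} km")

for multiple in (10.0, 2.0, 1.1, 1.001):
    r = multiple * r_s
    factor = math.sqrt(1 - r_s / r)
    print(f"  clock at {multiple:>6} x r_s runs at {factor:.4f} of the far-away rate")
```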

Using different assumptions …

Dr Humphreys’ new creationist cosmology literally ‘falls out’ of the equations of GR, so long as one assumes that the universe has a boundary. In other words, it assumes that the universe has a center and an edge: if you were to travel off into space, you would eventually come to a place beyond which there was no more matter. In this cosmology, the earth is near the center, as it appears to be as we look out into space.

This might sound like common sense, as indeed it is, but all modern secular (big bang) cosmologies deny it. That is, they make the arbitrary assumption (without any scientific necessity) that the universe has no boundaries—no edge and no center. In such an assumed universe, every galaxy would be surrounded by galaxies spread evenly in all directions (on a large enough scale), and therefore all the net gravitational forces would cancel out.

However, if the universe has boundaries, then there is a net gravitational effect toward the center. Clocks at the edge would be running at different rates to clocks on the earth. In other words, it is no longer enough to say God made the universe in six days. He certainly did, but six days by which clock? (If we say ‘God’s time’ we miss the point that He is outside of time, seeing the end from the beginning.)10

There appears to be observational evidence that the universe has expanded in the past, which fits with the many phrases God uses in the Bible to tell us that at creation He ‘stretched out’11 (other verses say ‘spread out’) the heavens.

If the universe is not much bigger than we can observe, and if it was only 50 times smaller in the past than it is now, then scientific deduction based on GR means it has to have expanded out of a previous state in which it was surrounded by an event horizon (a condition known technically as a ‘white hole’—a black hole running in reverse, something permitted by the equations of GR).

As matter passed out of this event horizon, the horizon itself had to shrink—eventually to nothing. Therefore, at one point time on this earth (relative to a point far away from it) would have been virtually frozen. An observer on earth would not in any way ‘feel different.’ ‘Billions of years’ would be available (in the frame of reference of the light traveling in deep space) for light to reach the earth, for stars to age, etc., while less than one ordinary day was passing on earth. This massive gravitational time dilation would seem to be a scientific inevitability if a bounded universe expanded significantly.

In one sense, if observers on earth at that particular time could have looked out and ‘seen’ the speed with which light was moving toward them out in space, it would have appeared as if it were traveling many times faster than c. (Galaxies would also appear to be rotating faster.) However, if an observer in deep space was out there measuring the speed of light, to him it would still only be traveling at c.

There is more detail of this new cosmology, at layman’s level, in the book by Dr Humphreys, Starlight and Time, which also includes reprints of his technical papers showing the equations.12

It is fortunate that creationists did not invent such concepts as gravitational time dilation, black and white holes, event horizons and so on, or we would likely be accused of manipulating the data to solve the problem. The interesting thing about this cosmology is that it is based upon mathematics and physics totally accepted by all cosmologists (general relativity), and it accepts (along with virtually all physicists) that there has been expansion in the past (though not from some imaginary tiny point). It requires no ‘massaging’—the results ‘fall out’ so long as one abandons the arbitrary starting point which the big bangers use (the unbounded cosmos idea, which could be called ‘what the experts don’t tell you about the “big bang”’).

Caution

While this is exciting news, all theories of fallible men, no matter how well they seem to fit the data, are subject to revision or abandonment in the light of future discoveries. What we can say is that at this point a plausible mechanism has been demonstrated, with considerable observational and theoretical support.

What if no one had ever thought of the possibility of gravitational time dilation? Many might have felt forced to agree with those scientists (including some Christians) that there was no possible solution: the vast ages are fact, and the Bible must be ‘reinterpreted’ (massaged) or increasingly rejected. Many have in fact been urging Christians to abandon the Bible’s clear teaching of a recent creation [see Q&A: Genesis] because of these ‘undeniable facts.’ This reinterpretation also means having to accept that there were billions of years of death, disease, and bloodshed before Adam, thus eroding the creation/Fall/restoration framework within which the gospel is presented in the Bible.

However, even without this new idea, such an approach would still have been wrong-headed. The authority of the Bible should never be compromised to accommodate mankind’s ‘scientific’ proposals. One little previously unknown fact, or one change in a starting assumption, can drastically alter the whole picture so that what was ‘fact’ is no longer so.

This is worth remembering when dealing with those other areas of difficulty which, despite the substantial evidence for Genesis creation, still remain. Only God possesses infinite knowledge. If we base our scientific research on the assumption that His Word is true (instead of the assumption that it is wrong or irrelevant), our scientific theories are much more likely, in the long run, to come to accurately represent reality.

References and notes

  1. T.G. Norman and B. Setterfield, The Atomic Constants, Light and Time (privately published, 1990).
  2. D. Russell Humphreys, Progress Toward a Young-earth Relativistic Cosmology, Proceedings of the Third International Conference on Creationism, Pittsburgh, pp. 267–286, 1994.
  3. J. Byl, On Time Dilation in Cosmology, Creation Research Society Quarterly 34(1):26–32, 1997.
  4. D.R. Humphreys, It’s Just a Matter of Time, Creation Research Society Quarterly 34(1):32–34, 1997.
  5. S.R. Conner and D.N. Page, Starlight and Time is the Big Bang, CEN Technical Journal 12(2):174–194, 1998.
  6. D.R. Humphreys, New Vistas of Space-time Rebut the Critics, CEN Technical Journal 12(2):195–212, 1998. [Ed. note: Refs 5 and 6, as well as other criticisms of Dr Humphreys’ model, with his responses, were published in the CEN Technical Journal.]
  7. Many billions of stars exist, many just like our own sun, according to the analysis of the light coming from them. Such numbers of stars have to be distributed through a huge volume of space, otherwise we would all be fried.
  8. The demonstrable usefulness of GR in physics can be separated from certain ‘philosophical baggage’ that some have illegitimately attached to it, and to which some Christians have objected.
  9. Such an object is called a ‘black hole.’
  10. Genesis 1:1; Ecclesiastes 3:11; Isaiah 26:4; Romans 1:20; 1 Timothy 1:17; and Hebrews 11:3. Interestingly, according to GR, time does not exist without matter.
  11. For example, Isaiah 42:5; Jeremiah 10:12; Zechariah 12:1.
  12. D. Russell Humphreys, Starlight and Time, Master Books, Green Forest, AR, 1994.

This chapter is from the book The Revised and Expanded Answers Book, published by Master Books, a division of New Leaf Press (Green Forest, Arkansas), and graciously provided at no charge to Answers in Genesis.


Tuesday, July 31, 2007

How Does GPS Work?

Global Positioning System satellites transmit signals to equipment on the ground. GPS receivers passively receive satellite signals; they do not transmit. GPS receivers require an unobstructed view of the sky, so they are used only outdoors and they often do not perform well within forested areas or near tall buildings. GPS operations depend on a very accurate time reference, which is provided by atomic clocks at the U.S. Naval Observatory. Each GPS satellite has atomic clocks on board.




Each GPS satellite transmits data that indicates its location and the current time. All GPS satellites synchronize operations so that these repeating signals are transmitted at the same instant. The signals, moving at the speed of light, arrive at a GPS receiver at slightly different times because some satellites are farther away than others. The distance to the GPS satellites can be determined by estimating the amount of time it takes for their signals to reach the receiver. When the receiver estimates the distance to at least four GPS satellites, it can calculate its position in three dimensions.
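
As a minimal sketch of the range calculation just described, the distance is simply the speed of light multiplied by the measured signal delay. The 67-millisecond delay below is an assumed figure, roughly what a satellite about 20,000 km away would produce.

```python
# Range from signal travel time: distance = speed of light x delay.
# The 67 ms delay is an assumed, illustrative value.

C = 299_792_458.0        # speed of light, m/s
travel_time_s = 0.067    # assumed measured signal delay, seconds

range_m = C * travel_time_s
print(f"Estimated range to this satellite: {range_m / 1000:.0f} km")
```
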
There are at least 24 operational GPS satellites at all times. The satellites, operated by the U.S. Air Force, orbit with a period of 12 hours. Ground stations are used to precisely track each satellite's orbit.

Determining Position

A GPS receiver "knows" the location of the satellites, because that information is included in satellite transmissions. By estimating how far away a satellite is, the receiver also "knows" it is located somewhere on the surface of an imaginary sphere centered at the satellite. It then determines the sizes of several spheres, one for each satellite. The receiver is located where these spheres intersect.
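
Below is a minimal Python/NumPy sketch of that sphere-intersection idea. The satellite coordinates and the receiver position are made-up numbers, the ‘measured’ ranges are simulated from them, and receiver clock error is ignored, so this illustrates the geometry rather than a real GPS solver.

```python
import numpy as np

# Position from ranges to four satellites ("sphere intersection").
# Satellite positions and the receiver location are made-up values in
# an Earth-centred frame (metres); receiver clock error is ignored.

sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
true_pos = np.array([1_113e3, 5_200e3, 3_000e3])    # assumed receiver location
ranges = np.linalg.norm(sats - true_pos, axis=1)    # simulated measured ranges

# Subtracting the first sphere equation |x - s_i|^2 = d_i^2 from the
# other three removes the |x|^2 term and leaves a linear system A x = b.
s0, d0 = sats[0], ranges[0]
A = 2 * (s0 - sats[1:])
b = ranges[1:]**2 - d0**2 - np.sum(sats[1:]**2, axis=1) + np.sum(s0**2)

estimate = np.linalg.solve(A, b)
print("Estimated receiver position (km):", np.round(estimate / 1000, 1))
# With exact ranges this recovers the assumed position: (1113, 5200, 3000) km.
```

In a real receiver a fourth unknown, the receiver clock offset, is solved for at the same time, which is why at least four satellites are needed.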



GPS Accuracy
The accuracy of a position determined with GPS depends on the type of receiver. Most hand-held GPS units have an accuracy of about 10-20 meters. Other types of receivers use a method called Differential GPS (DGPS) to obtain much higher accuracy. DGPS requires an additional receiver fixed at a known location nearby. Observations made by the stationary receiver are used to correct positions recorded by the roving units, producing an accuracy of better than 1 meter.
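
As a tiny sketch of the differential idea, with made-up numbers: the base station at a precisely known location works out how far off each measured satellite range is, and the rover subtracts that same error from its own measurement.

```python
# Differential GPS sketch with made-up numbers: the base station knows
# its true position, so it knows what each satellite range should be.
# The difference is broadcast as a correction for rovers to apply.

true_range_at_base = 20_186_000.0   # m, computed from the known base position
measured_at_base = 20_186_012.5     # m, what the base receiver measured
measured_at_rover = 20_191_437.8    # m, what the roving receiver measured

correction = measured_at_base - true_range_at_base       # about +12.5 m of error
corrected_rover_range = measured_at_rover - correction

print(f"Per-satellite correction: {correction:+.1f} m")
print(f"Rover range after correction: {corrected_rover_range:.1f} m")
```
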
When the system was created, timing errors were inserted into GPS transmissions to limit the accuracy of non-military GPS receivers to about 100 meters. This part of GPS operations, called Selective Availability, was eliminated in May 2000.

Sunday, July 29, 2007

Mind Over Matter: How Does the Brain Work?




Grade Level: 9-12
Subject Area: life science
Curriculum Focus: animal behavior, psychology
Duration: 3-4 hours

Objective
Students will be able to—
1. Ask questions that uncover various aspects of how the brain works.
2. Research previous attempts by scientists to answer these same questions, using both print and online resources.
3. Devise an experiment that sheds some scientific light on the questions.
4. Present their findings to the entire class in the form of an illustrated oral report or multimedia presentation.

Materials
Copies of the student handout for each student; large chart paper and markers or concept-mapping software such as Inspiration; print resources dealing with the neurobiology and psychology of the brain; computer(s) with Internet access; camera (digital or conventional); presentation software such as PowerPoint or HyperStudio (if available); additional materials as necessary for each group’s experiments.

Procedure
In this activity, students will be challenged by some of the greatest scientific mysteries that exist. More than any other part of the human body, the brain raises questions that scientists and psychologists throughout history haven’t yet been able to answer: Why do we retain some memories and lose others? How do our senses affect memory? These and other questions will become the focus of student research during the activity. In small groups, students will tackle one question, focusing on any previous tests and experiments that may have been conducted. Students will then devise and conduct an experiment or series of experiments on their friends, fellow students, and family members that will shed some light on their question. As a finale to the activity, each group will present to the class an illustrated oral report or multimedia presentation detailing its findings.
1. Begin the activity by leading a class discussion about some of the many things we do not yet understand about the human brain. Encourage students to brainstorm a list of questions to which they don’t know the answers. As the discussion unfolds, record these questions on chart paper or by using Inspiration software. The questions they come up with might include the following:
  • Why do we retain some memories and lose others?
  • How do our senses affect memory?
  • How do our brains learn things?
  • What makes one person’s brain good at sculpture and another’s good at chess?
  • Why do some people learn better by hearing things and others by doing things?
  • What changes does the brain undergo as a person ages, and how do these changes affect memory and learning?
  • Why do mnemonic devices help some people remember things?
  • How do a person’s emotions and feelings affect the capacity to learn?
  • Can a computer mimic human intelligence?
  • How does the brain remember things of which it is not aware (such as subliminal advertising)?
  • How much brain does a person need? Why are some surgical patients able to function well with half of their brains removed?
2. Divide your students into groups and ask each group to select a question from the class list to work with. Once each group has chosen a question, the next task is for students to research previous attempts by scientists to answer the question they have chosen, focusing on any psychological tests that have been conducted. They may use available print resources in the classroom, take a trip to the school library, or conduct an Internet search. A list of student-friendly neurobiology sites and links to online experts is available in the Related Resources section below. Remind students to keep close track of bibliographic information for citations, regardless of the media they are using.
3. When their research is complete, ask each group to design an experiment or series of experiments by which they might shed some scientific light on their chosen problem. Before they begin, however, conduct the Listening Comprehension Experiment with your students to give them an idea of the kind of project they should be developing.
4. Set students to work on their experiments. Remind them that their goal should be to develop methods that, to the best of their knowledge and research, have not yet been tried by scientists. For example, if a group of students wanted to test whether the sense of smell or the sense of touch is more strongly associated with memory, they could design an experiment in which participants are asked to memorize a sequence of 15 different common smells (oranges, peanut butter, dog food, etc.) and 15 common textures (carpeting, silk, water, etc.). Whichever sequence is easier to recall, on average, might indicate which sense is more closely associated with memory.
5. As early as possible at this stage, students need to identify what additional materials they will need for their experiments, divide up responsibility for procuring these materials, and make sure they are collected. It will be the group’s responsibility to decide each member’s role in carrying out the actual experiment and recording the findings. If you have a camera available, encourage students to document their experiment trials with photos. If you are using a conventional 35-millimeter camera, you can digitize the pictures with a scanner or ask your developer about having the film processed onto a CD or delivered as online files. This will make it easier later for students to insert photos into multimedia presentations.
6. When their experiments are designed, ask each group to conduct them on at least 10 different participants, if not more. These participants may consist of family members, friends, and fellow students. Students should be sure to record the results of their experiments carefully and bring them to class.
7. After the experiments have been conducted, the members of each group should compare the results they obtained and attempt to draw a conclusion from their work; a simple tally like the one sketched after this list can help organize the comparison. Make sure you warn them that not all experiments result in definitive conclusions. When they have collated their results and discussed any possible conclusions, each group should prepare a report for the rest of the class. Their reports may take the form of written descriptions with hand-drawn illustrations or PowerPoint or HyperStudio presentations with digital images. Make sure to leave plenty of time for other students to question the group about their experiments and findings.
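
As an example of the kind of simple tally mentioned in step 7, the sketch below averages recall scores for the smell-versus-texture experiment described in step 4. All the numbers are invented placeholders that a real group would replace with its own data.

```python
# Illustrative tally for the smell-vs-texture memory experiment from
# step 4: each entry is one participant's recall score out of 15.
# All numbers are invented placeholders.

smell_scores = [11, 9, 12, 10, 8, 13, 9, 11, 10, 12]
texture_scores = [8, 7, 10, 9, 6, 11, 8, 9, 7, 10]

mean_smell = sum(smell_scores) / len(smell_scores)
mean_texture = sum(texture_scores) / len(texture_scores)

print(f"Average smells recalled:   {mean_smell:.1f} out of 15")
print(f"Average textures recalled: {mean_texture:.1f} out of 15")

# A higher average for one sense only hints at a stronger link to
# memory; a sample of ten participants cannot settle the question.
```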

Closure
Research and experimentation are fascinating, but how we actually benefit from the findings is the true measure of an experiment’s success. Have students discuss how their research and findings might impact their lives. For example, you might ask the class to analyze how what they have learned about brain-based learning might improve their own academic achievement, and what changes they would have to make in their habits or lifestyle to realize those benefits. Alternatively, you can have students write an essay on the same topic.

Extension
1. Before students embark on their own research, invite a research scientist to visit your class to discuss the scientific method and the principles of quality research and experimentation. Alternatively, you can arrange an on-site visit to a research lab in the psychology department of a nearby college or university.
2. Have groups correspond with the neuroscientists on Neuroscience for Kids (see Related Resources below). They can share the details of their experiments and findings, ask for the experts’ interpretation of their data, and request leads for additional sources on their topics.
3. If time permits, encourage students to widen their experiment samples by including subjects of different ages, genders, occupations, and any other relevant categories they can devise. Analyze whether the original results stand up to the different test populations.
4. Suggest that students create an online questionnaire to supplement their experiments. The questionnaire can elicit information related to the experiment. Post it on an educational listserv and let students analyze the results they receive.
5. Discuss the findings of one of the student-designed experiments with another class and challenge them to devise a different experiment that seeks to answer the same question. Ask students to predict whether the second group’s data will support their original results, then reevaluate their findings after the second experiment is conducted.