
A Short History of Hydrogen Sulfide

From the sewers of Paris to physiological messenger

By Roger P. Smith, from http://www.americanscientist.org/issues/pub/2010/1/a-short-history-of-hydrogen-sulfide/2

Early last year, reports began to emerge in the Southeastern United States of a strange illness. Homeowners reported nosebleeds, sinus irritation and respiratory problems that appeared to be associated with corrosion of copper pipes and air conditioner coils in their houses.

Corroded and non-corroded copper air-conditioner coils.

The culprit seems to be drywall imported from China and possibly contaminated with strontium sulfide, an unstable salt that releases hydrogen sulfide on exposure to moisture. The drywall was used widely in the housing boom of 2004–2007, and in the rebuilding efforts after Hurricane Katrina in 2005, when domestic suppliers could not keep up with demand. The Consumer Product Safety Commission is now investigating whether sulfide gases given off by the drywall, including hydrogen sulfide, are to blame. The Florida Department of Health maintains a Web site with information for consumers. Lawsuits abound, and many who are able to do so have moved out of their homes. Several estimates place the number of affected houses at 100,000.

To those experiencing or investigating this phenomenon early on, it seemed bizarre. But in fact, this is just the latest chapter in the history of a chemical whose effects were first noted in the 16th century. And there is still more to learn about its role in the human body. Recent research offers insights into its biochemical actions as well as some intriguing suggestions for medical uses.

Occupational Exposures

In 1713, a remarkable Italian physician named Bernardino Ramazzini published De Morbis Artificum, or Diseases of Workers. In Chapter 14, titled “Diseases of Cleaners of Privies and Cesspits,” he describes a painful inflammation of the eyes which was common among such workers. The inflammation often led to secondary bacterial invasion, and sometimes to total blindness. Displaying amazing insight, Ramazzini hypothesized that when the cleaners disturbed the excrement in the course of their work, an unknown volatile acid was produced, which was irritating to the eyes. It was also at least partially responsible for the odor of excrement, and it is now known to be generated wherever organic matter undergoes putrefaction.

Ramazzini further postulated that that same acid was causing copper and silver coins which the workers had in their pockets to turn black on their surfaces—an eerie resonance with the phenomena recently observed by U.S. homeowners. Around 1777, a series of accidents—some of them fatal—began to occur in Paris due to a gas emanating from its sewer system. The commission appointed to study the cases made its findings public in 1785. The report described two distinct types of poisonings: a mild form involving inflammation of the eyes and mucous membranes as already described by Ramazzini, and a severe form that was characterized by a fulminating (rapidly developing) asphyxia. It is little wonder that the French Romantic writer Victor Hugo (1802–1885) referred to the Parisian sewers as “the intestine of the Leviathan.” Many years were to pass, however, before chemical analyses would establish the presence of hydrogen sulfide in the sewers and implicate it as the cause of the accidents.

The history of exposures has focused on sewers and the workplace, but the corrosive effects of hydrogen sulfide are also common knowledge in Rotorua, New Zealand, a town built up over centuries in a geothermally active area. Constant exposure to low environmental levels of hydrogen sulfide produces the same kind of corrosion, even as residents enjoy spas, indoor heating and cooking with the hot gases.

Today, the American Conference of Governmental Industrial Hygienists has set the so-called threshold limit value for hydrogen sulfide in workplace air at 10 parts per million (ppm) for eight hours a day, five days a week over a working lifetime. The U.S. National Institute for Occupational Safety and Health estimated in 1977—some 200 years after the Paris accidents—that 125,000 workers in at least 77 occupations, including drilling for petroleum, tanneries and the paper industry, may be at risk of exposure to hydrogen sulfide.

Chemical Discovery

Around 1750, a humble young Swede beginning his career as an apothecary was fortunate to have a series of very understanding mentors who allowed him considerable free time for reading and experimentation. His name was Carl Wilhelm Scheele, and he turned out to be a gifted chemist. Like many chemists before and after him, Scheele seems to have given little thought to the biological effects of the materials with which he worked. One day while distilling potassium ferrocyanide with sulfuric acid, he noted a “strong, peculiar and not unpleasant odor.” He brought himself to taste this gas and described it as “slightly on sweet [sic] and somewhat heating on the mouth.” Today we describe the odor as that of bitter almonds and call the gas hydrogen cyanide. Scheele may have been fortunate to have escaped with his life.

Perhaps a guardian angel was with him again on the day that he treated ferrous sulfide (iron(II) sulfide) with a mineral acid. He called the rank odor that resulted Schwefelluft (sulfur air) and referred to it as stinkende (stinking or fetid). Today we refer to the odor as that of rotten eggs. His patron, the Swedish chemist and mineralogist Torbern Olof Bergman, also demonstrated its presence in some mineral springs. The publication date for these original observations was 1777—around the time of the Paris accidents. The fact that the same man discovered both hydrogen sulfide and hydrogen cyanide was the start of a long series of coincidences and discoveries about the two chemicals that would uncover their similarities.

Biological Effects

Investigations of the biological effects of hydrogen sulfide began around the turn of the 19th century. François Chaussier, François Magendie, Claude Bernard and Felix Hoppe-Seyler were among the well-known scientists of the day who labored in that vineyard.

In experiments with dogs, marked differences in the effects of hydrogen sulfide were noted with only small changes in its concentration in the air they were breathing. Fifty ppm, which was considered a minimally lethal concentration at the time, resulted in a slight progressive depression in the rate and depth of respirations. After many hours of exposure, the dogs died from a type of pulmonary edema, acute respiratory distress syndrome (ARDS). When that concentration was doubled to 100 ppm, death resulted in 15 to 20 minutes. In these cases respiration was stimulated almost immediately; this progressed to a pronounced hyperpnea (deep breathing), and death in apnea followed. At 300 ppm, respiratory arrest occurred after a few violent gasps. The same effects on respiration, with the exception of ARDS, were known to occur with injected or inhaled hydrogen cyanide. Mice may be more resistant to the effects of hydrogen sulfide. In a 1964 experiment, they survived for 10 minutes in an atmosphere of 1,872 ppm and for 20 minutes at 985 ppm.

A more complete explanation of the respiratory stimulant effects of hydrogen cyanide and hydrogen sulfide had to await the discovery of the chemoreceptor function of the carotid body, and the reflex effects that follow the activation of those receptors. The respiratory stimulant effects of cyanide and sulfide could be completely abolished by severing the sinus nerve and thus denervating the carotid sinus. In that case, larger doses of either chemical resulted only in respiratory depression, which was presumably mediated via the brainstem. Similarly, injections of sulfide or cyanide into the internal carotid or vertebral arteries also failed to stimulate respiration, since in those cases the chemicals reached the carotid sinus only after dilution in the general circulation. When innervation of the sinus was intact, the hyperpnea was accompanied by a fleeting rise in systemic blood pressure, and sometimes by a slowed heart rate (bradycardia). The cardiovascular effects varied with the injection site and the species and are still not adequately explained.

A Persistent False Trail

The great German biochemist Hoppe-Seyler became famous for his discovery of the abnormal form of hemoglobin known as methemoglobin, in which some or all of the heme irons have been oxidized to the ferric form. This reaction is readily mediated both in vivo and in vitro by sodium nitrite. Methemoglobin cannot reversibly combine with oxygen, and the disruption of the oxygen-transport function of the blood can result in hypoxia and death.

In 1863, Hoppe-Seyler passed a stream of pure hydrogen sulfide through a sample of human blood and claimed to have observed a greenish pigment that was associated with shifts in the visible absorption spectrum of hemoglobin. Although he was aware that he had probably produced a mixture containing unstable and denatured products, which resulted in turbidity and precipitation and made the absorption spectra suspect, he still thought that the mixture contained a new form of hemoglobin. He called it sulfhemoglobin and thereby launched one of the most confused areas in hematology. It led to the hypothesis some still subscribe to—namely, that hydrogen sulfide is a blood poison like sodium nitrite and carbon monoxide. No matter that animal experiments clearly demonstrated that it was a respiratory toxin, or that no such pigment has ever been identified in the blood of animals or humans fatally poisoned with hydrogen sulfide. Sulfhemoglobin generated by hydrogen sulfide appears to be a strictly in vitro phenomenon, and it has yet to be prepared in pure form.

Sulfmethemoglobin

No less a scientist than Linus Pauling and his associates described the magnetic properties of another blood pigment, which they called sulfmethemoglobin. This pigment is easily prepared in pure form by mixing hydrogen sulfide with methemoglobin, and it is chemically analogous to cyanmethemoglobin, in which the cyanide ion is bound to ferric irons of heme. The hydrosulfide anion (HS−) also binds to ferric heme, albeit not quite so tenaciously as cyanide. Indeed, this reaction has been exploited medically as an antidote to cyanide poisoning. One can deliberately inject sodium nitrite intravenously to generate a tolerable level of methemoglobin. The methemoglobin will temporarily bind free cyanide as the inactive complex cyanmethemoglobin. Over time the cyanide is slowly released, at a rate at which the body’s natural detoxification mechanisms can deal with it.

At least three laboratories have demonstrated that the same principles can be applied to hydrogen sulfide poisoning and that induced methemoglobinemia can indeed be lifesaving. At least a half dozen successful human resuscitations have been reported in the literature. The odds against its successful application, however, are high. Few poisons are more rapidly acting than inhaled hydrogen sulfide, and inhalation is invariably the route of exposure. Sulfide poisoning tends to occur in remote locations, and there is seldom a medically qualified individual on the scene who is prepared with a parenteral form of nitrite and trained to make intravenous injections. Most successful resuscitations from cyanide poisoning have occurred in individuals who ingested soluble salts of cyanide, where absorption is delayed.

Chemical Similarities Between Cyanide and Sulfide

In addition to the similarities in their physiological effects, cyanide and sulfide have chemical similarities. The undissociated forms of both hydrogen cyanide and hydrogen sulfide are flammable, volatile gases. Hydrogen sulfide (whose vapor density, or d, is 1.19) is heavier than air (d = 1.0), whereas hydrogen cyanide (d = 0.941) is lighter. Both are weak acids with acid dissociation constants (pKa) that are of some physiological significance: hydrogen cyanide 9.2–9.3 and hydrogen sulfide 7.04. Both form salts with sodium and potassium as well as with some alkaline earths. Both anions bind to methemoglobin as noted above, and each of those complexes has its distinct visible absorption spectrum. And both are inhibitors of cytochrome c oxidase, the terminal enzyme in the electron transport chain that reacts with molecular oxygen in aerobic organisms. Blockage of that key enzyme is believed to be the mechanism of action in the rapidly lethal form of cyanide and sulfide poisonings.
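To see why a pKa near 7 is of physiological significance, a quick Henderson-Hasselbalch estimate helps. The sketch below is illustrative only; the pKa values come from the paragraph above, and a blood pH of 7.4 is the assumed input.

```python
def fraction_undissociated(pKa, pH=7.4):
    """Fraction of a weak acid remaining in its neutral, membrane-permeant form.

    Henderson-Hasselbalch: [A-]/[HA] = 10**(pH - pKa).
    """
    ratio = 10 ** (pH - pKa)      # dissociated / undissociated
    return 1.0 / (1.0 + ratio)

print(f"H2S (pKa 7.04): {fraction_undissociated(7.04):.0%} undissociated at blood pH 7.4")
print(f"HCN (pKa 9.2):  {fraction_undissociated(9.2):.0%} undissociated at blood pH 7.4")
```

Run as written, this suggests that roughly a third of dissolved hydrogen sulfide remains un-ionized at blood pH while nearly all of the hydrogen cyanide does, one reason the lower pKa of hydrogen sulfide matters physiologically.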

New Biological Roles

Inhibition of cytochrome c oxidase results in a decrease in oxidative phosphorylation (the metabolic pathway that produces ATP). This in turn lowers the metabolic rate and body temperature in mice. These phenomena accompany states of suspended animation. When mice were exposed to hydrogen sulfide at concentrations of 80 ppm, dramatic effects were observed in the first 5 minutes. By 6 hours, their metabolic rate had dropped by 90 percent and body temperature to 2 degrees Celsius above ambient temperature. When the mice were then returned to room air, the metabolic rate and temperature returned to normal with no detectable behavioral or functional deficits. As already noted, lethal levels for hydrogen sulfide in mice are much higher than for dogs, but this state in mice must also occur over a fairly narrow range of concentrations.

Finally, and most astonishingly of all, experimental evidence contributed in 2008 indicates that like carbon monoxide and nitric oxide, hydrogen sulfide is an important signaling molecule in biology, and it may find a role in medicine. It is physiologically generated in mice by cystathionine γ-lyase, and genetic deletion of that enzyme markedly reduces hydrogen sulfide levels in the serum, heart, aorta and other tissues. Mutant mice lacking the enzyme have marked hypertension and diminished endothelium-dependent vasorelaxation, consistent with an important vasodilator role for hydrogen sulfide. The enzyme is physiologically activated by the calcium-binding protein calmodulin, which is a mechanism for hydrogen sulfide formation in response to vascular activation. Thus, hydrogen sulfide appears to be a physiologic vasodilator and regulator of blood pressure. Its relative contribution vis-à-vis the similar nitric oxide is not yet clear.

What a strange and wondrous journey this odoriferous and violently toxic chemical, associated with the excrement of humanity, has led us on for five centuries. It’s a history that could fill a book, one that covers a vast range of territory, from the search to determine the cause of workplace injuries to fascinating discoveries about how hydrogen sulfide interacts with chemoreceptors in the body. And for all the false leads, in the end it may yet turn out to have some useful applications in medicine—even if only a new Viagra.

Bibliography

  • Blackstone, E., M. Morrison and M. B. Roth. 2005. H2S induces a suspended animation-like state in mice. Science 308:518.
  • CPSC/EPA/HUD/CDC/ATSDR Press Statement on Initial Chinese Drywall Studies. October 2009. http://www.cpsc.gov/info/drywall/oct2009statement.pdf. Accessed November 17, 2009.
  • Coryell, C. D., F. Stitt and L. Pauling. 1937. The magnetic properties and structure of ferrihemoglobin (methemoglobin) and some of its compounds. Journal of the American Chemical Society 59:633–642.
  • d’Emmanuele di Villa Bianca, R., et al. 2009. Hydrogen sulfide as a mediator of human corpus cavernosum smooth-muscle relaxation. Proceedings of the National Academy of Sciences of the USA 106(11):4513–4518.
  • Hugo, V. 1938. Les Misérables. Translated by L. Wraxall, illustrations by L. Ward. New York: The Heritage Press.
  • Mellor, J. W. 1930. A Comprehensive Treatise on Inorganic and Theoretical Chemistry. Vol. X. Hydrogen Sulfide. London: Longmans, Green and Co.
  • Mitchell, C. W., and S. J. Davenport. 1924. Hydrogen sulphide literature. Public Health Reports 39:1–13.
  • Partington, J. R. 1962. A History of Chemistry, Vol. 3, Chapter VI. Chemistry in Scandinavia. II. Scheele. London: Macmillan and Co., Ltd.
  • Ramazzini, B. 1940. Diseases of Workers. The Latin text of 1713, revised, with translations and notes by W. C. Wright. Chicago: University of Chicago Press.
  • Smith, L., H. Kruszyna and R. P. Smith. 1977. The effect of methemoglobin on the inhibition of cytochrome c oxidase by cyanide, sulfide or azide. Biochemical Pharmacology 26:2247–2250.
  • Smith, R. P., and R. E. Gosselin. 1964. The influence of methemoglobinemia on the lethality of some toxic anions. II. Sulfide. Toxicology and Applied Pharmacology 6:584–592.
  • Smith, R. P. 1979. 4. Effects on Animals. In Hydrogen Sulfide, A Report by the Subcommittee on Hydrogen Sulfide (chaired by R. P. Smith) of the Committee on Medical and Biologic Effects of Environmental Pollutants, Assembly of Life Sciences, National Research Council, National Academy of Sciences, Washington, D. C., 1977. Reprinted by University Park Press, Baltimore, 1979.
  • Smith, R. P. 1996. Toxic responses of the blood. In Casarett and Doull’s Toxicology: The Basic Science of Poisons, 5th edition, edited by C. D. Klaassen. New York: Macmillan. pp. 335–354.
  • Wayne, Leslie. 2009. Chinese Drywall Found to Differ Chemically. The New York Times, October 29, 2009.
  • Yang, G., et al. 2008. H2S as a physiologic vasorelaxant: Hypertension in mice with deletion of cystathionine γ-lyase. Science 322:587–590.

Read more…
Comment: A really nice water analogy for the field properties divergence, curl and gradient, from the blog Starts With a Bang

…it's pretty mathematically intensive, but what's missing from most textbooks and E&M courses are physical explanations of what the mathematics means. For instance, I've started teaching about fields, and pretty much every textbook out there goes on and on about the properties of fields. They say you can do three things to fields: take the gradient, divergence, or curl of them.

(Are you asleep yet? I'm sorry!)

What do these things mean? An easy way to picture it is in terms of water. If you placed a drop of water anywhere on, say, Earth, the magnitude and direction of how it rolls down is the gradient of the Earth's elevation.


If you let that drop of water flow, as it goes downhill, it can either spread out or converge to a narrower stream. When we quantify that, that's what the divergence of the field is.


And finally, when that water is flowing, sometimes it gets an internal rotational motion, like an eddy. A measure of that rotational motion is called the curl of the field.


Well, one math geek statement is as follows: the curl of the gradient of a scalar field is always zero. What does this mean, in terms of our water? It means that I can take any topography I can find, invent, or even dream up.


I can drop a tiny droplet of water on it anywhere I like, and while the water may roll downhill (depending on the gradient), and while the water may spread out or narrow (depending on the divergence of the gradient), it will not start to rotate. For rotation to happen, you need something more than just a drop starting out on a hill, no matter how your hill is shaped! That's what it means when someone says, "The curl of the gradient is zero."
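For anyone who wants to check the "math geek statement" symbolically, here is a tiny sketch using SymPy; the elevation function h below is an arbitrary made-up surface, and any smooth choice gives the same answer.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An arbitrary "topography": elevation as a function of position.
h = sp.sin(x) * sp.exp(-y**2) + x**2 * y

# Gradient of the scalar field (how steeply, and in which direction, the hill drops away).
grad_h = sp.Matrix([sp.diff(h, x), sp.diff(h, y), 0])

# Curl of that gradient field, component by component.
curl = sp.Matrix([
    sp.diff(grad_h[2], y) - sp.diff(grad_h[1], z),
    sp.diff(grad_h[0], z) - sp.diff(grad_h[2], x),
    sp.diff(grad_h[1], x) - sp.diff(grad_h[0], y),
])

print(sp.simplify(curl))  # Matrix([[0], [0], [0]]) -- the curl of a gradient vanishes
```

Because mixed partial derivatives commute for smooth functions, every component cancels, which is exactly the "no rotation from a hill alone" picture above.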



Read more…
Urban 'Green' Spaces May Contribute to Global Warming

A garden lawn (image via Wikipedia)

ScienceDaily (Jan. 22, 2010) — Dispelling the notion that urban "green" spaces help counteract greenhouse gas emissions, new research has found -- in Southern California at least -- that total emissions would be lower if lawns did not exist.


Turfgrass lawns help remove carbon dioxide from the atmosphere through photosynthesis and store it as organic carbon in soil, making them important "carbon sinks." However, greenhouse gas emissions from fertilizer production, mowing, leaf blowing and other lawn management practices are four times greater than the amount of carbon stored by ornamental grass in parks, a UC Irvine study shows. These emissions include nitrous oxide released from soil after fertilization. Nitrous oxide is a greenhouse gas that's 300 times more powerful than carbon dioxide, the Earth's most problematic climate warmer.

"Lawns look great -- they're nice and green and healthy, and they're photosynthesizing a lot of organic carbon. But the carbon-storing benefits of lawns are counteracted by fuel consumption," said Amy Townsend-Small, Earth system science postdoctoral researcher and lead author of the study, forthcoming in the journal Geophysical Research Letters. The research results are important to greenhouse gas legislation being negotiated. "We need this kind of carbon accounting to help reduce global warming," Townsend-Small said. "The current trend is to count the carbon sinks and forget about the greenhouse gas emissions, but it clearly isn't enough."

Turfgrass is increasingly widespread in urban areas and covers 1.9 percent of land in the continental U.S., making it the most common irrigated crop.

In the study, Townsend-Small and colleague Claudia Czimczik analyzed grass in four parks near Irvine, Calif. Each park contained two types of turf: ornamental lawns (picnic areas) that are largely undisturbed, and athletic fields (soccer and baseball) that are trampled and replanted and aerated frequently.

The researchers evaluated soil samples over time to ascertain carbon storage, or sequestration, and they determined nitrous oxide emissions by sampling air above the turf. Then they calculated carbon dioxide emissions resulting from fuel consumption, irrigation and fertilizer production using information about lawn upkeep from park officials and contractors.

The study showed that nitrous oxide emissions from lawns were comparable to those found in agricultural farms, which are among the largest emitters of nitrous oxide globally.

In ornamental lawns, nitrous oxide emissions from fertilization offset just 10 percent to 30 percent of carbon sequestration. But fossil fuel consumption for management, the researchers calculated, released about four times more carbon dioxide than the plots could take up. Athletic fields fared even worse, because -- due to soil disruption by tilling and resodding -- they didn't trap nearly as much carbon as ornamental grass but required the same emissions-producing care.

"It's impossible for these lawns to be net greenhouse gas sinks because too much fuel is used to maintain them," Townsend-Small concluded.

Previous studies have documented lawns storing carbon, but this research was the first to compare carbon sequestration to nitrous oxide and carbon dioxide emissions from lawn grooming practices.
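A toy budget makes the arithmetic concrete. Every number below is invented for illustration; only the ratios (a 10 to 30 percent nitrous oxide offset and roughly four times as much fuel-related carbon dioxide as carbon stored) come from the study as described above.

```python
# Illustrative only: say an ornamental lawn stores 100 units of CO2-equivalent per year.
carbon_stored = 100.0

# Fertilizer-related N2O emissions offset roughly 10-30% of that storage (take the high end).
n2o_offset = 0.30 * carbon_stored

# Fuel burned for mowing, blowing, irrigation pumping and other upkeep released about
# four times more CO2 than the plots could take up.
fuel_emissions = 4.0 * carbon_stored

net = carbon_stored - n2o_offset - fuel_emissions
print(f"Net balance: {net:+.0f} units per year (negative means the lawn is a net source)")
```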

The UCI study was supported by the Kearney Foundation of Soil Science and the U.S. Department of Agriculture.


Freshly mowed grass. (Credit: iStockphoto/Nicholas Campbell)
University of California - Irvine (2010, January 22). Urban 'green' spaces may contribute to global warming. ScienceDaily. Retrieved January 23, 2010, from http://www.sciencedaily.com/releases/2010/01/100119133515.htm

Read more…
From Science Daily

ScienceDaily (Jan. 23, 2010) — Researchers have discovered that some of the most fundamental assumptions about how water moves through soil in a seasonally dry climate such as the Pacific Northwest are incorrect -- and that a century of research based on those assumptions will have to be reconsidered.


A new study by scientists from Oregon State University and the Environmental Protection Agency showed -- much to the surprise of the researchers -- that soil clings tenaciously to the first precipitation after a dry summer, and holds it so tightly that it almost never mixes with other water.

The finding is so significant, researchers said, that they aren't even sure yet what it may mean. But it could affect our understanding of how pollutants move through soils, how nutrients get transported from soils to streams, how streams function and even how vegetation might respond to climate change.

The research was just published online in Nature Geoscience, a professional journal.

"Water in mountains such as the Cascade Range of Oregon and Washington basically exists in two separate worlds," said Jeff McDonnell, an OSU distinguished professor and holder of the Richardson Chair in Watershed Science in the OSU College of Forestry. "We used to believe that when new precipitation entered the soil, it mixed well with other water and eventually moved to streams. We just found out that isn't true."

"This could have enormous implications for our understanding of watershed function," he said. "It challenges about 100 years of conventional thinking."

What actually happens, the study showed, is that the small pores around plant roots fill with water that gets held there until it's eventually used up in plant transpiration back to the atmosphere. Then new water becomes available with the return of fall rains, replenishes these small localized reservoirs near the plants and repeats the process. But all the other water moving through larger pores is essentially separate and almost never intermingles with that used by plants during the dry summer.

The study found in one test, for instance, that after the first large rainstorm in October, only 4 percent of the precipitation entering the soil ended up in the stream -- 96 percent was taken up and held tightly by soil around plants to recharge soil moisture. A month later when soil moisture was fully recharged, 55 percent of precipitation went directly into streams. And as winter rains continue to pour moisture into the ground, almost all of the water that originally recharged the soil around plants remains held tightly in the soil -- it never moves or mixes.

"This tells us that we have a less complete understanding of how water moves through soils, and is affected by them, than we thought we did," said Renee Brooks, a research plant physiologist with the EPA and courtesy faculty in the OSU Department of Forest Ecosystems and Society.

"Our mathematical models of ecosystem function are based on certain assumptions about biological processes," Brooks said. "This changes some of those assumptions. Among the implications is that we may have to reconsider how other things move through soils that we are interested in, such as nutrients or pollutants."

The new findings were made possible by advances in the speed and efficiency of stable isotope analyses of water, which allowed scientists to essentially "fingerprint" water and tell where it came from and where it moved to. Never before was it possible to make so many isotopic measurements and get a better view of water origin and movement, the researchers said.

The study also points out the incredible ability of plants to take up water that is so tightly bound to the soil, with forces nothing else in nature can match.

The research was conducted in the H.J. Andrews Experimental Forest near Blue River, Ore., a part of the nation's Long Term Ecological Research, or LTER Program. It was supported by the EPA.


Image credit: iStockphoto/Mats Lund.
Oregon State University (2010, January 23). Water hits and sticks: Findings challenge a century of assumptions about soil hydrology. ScienceDaily. Retrieved January 23, 2010, from http://www.sciencedaily.com/releases/2010/01/100121173452.htm

Read more…

Last Decade Was Warmest on Record, 2009 One of Warmest Years, NASA Research Finds

ScienceDaily (Jan. 22, 2010) — A new analysis of global surface temperatures by NASA scientists finds the past year was tied for the second warmest since 1880. In the Southern Hemisphere, 2009 was the warmest year on record.


Although 2008 was the coolest year of the decade because of a strong La Nina that cooled the tropical Pacific Ocean, 2009 saw a return to near-record global temperatures as the La Nina diminished, according to the new analysis by NASA's Goddard Institute for Space Studies (GISS) in New York. The past year was a small fraction of a degree cooler than 2005, the warmest on record, putting 2009 in a virtual tie with a cluster of other years -- 1998, 2002, 2003, 2006, and 2007 -- for the second warmest on record.

"There's always interest in the annual temperature numbers and a given year's ranking, but the ranking often misses the point," said James Hansen, GISS director. "There's substantial year-to-year variability of global temperature caused by the tropical El Nino-La Nina cycle. When we average temperature over five or ten years to minimize that variability, we find global warming is continuing unabated."

January 2000 to December 2009 was the warmest decade on record. Looking back to 1880, when modern scientific instrumentation became available to monitor temperatures precisely, a clear warming trend is present, although there was a leveling off between the 1940s and 1970s.

In the past three decades, the GISS surface temperature record shows an upward trend of about 0.36 degrees F (0.2 degrees C) per decade. In total, average global temperatures have increased by about 1.5 degrees F (0.8 degrees C) since 1880.
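To make the smoothing and trend-fitting described above concrete, here is a sketch on purely synthetic numbers; the 0.2 degrees C per decade built into the fake series echoes the GISS figure, but nothing below is real data.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2010)

# Purely synthetic anomalies: a built-in 0.2 C/decade trend plus El Nino/La Nina-like noise.
anomaly = 0.02 * (years - years[0]) + rng.normal(0.0, 0.12, size=years.size)

# Five-year running mean, the kind of averaging Hansen describes for seeing past
# the year-to-year swings.
window = 5
running_mean = np.convolve(anomaly, np.ones(window) / window, mode="valid")

# Least-squares linear trend, expressed per decade.
slope_per_year = np.polyfit(years, anomaly, 1)[0]
print(f"Fitted trend: {slope_per_year * 10:.2f} C per decade")
print(f"Rise in the 5-year mean over the period: {running_mean[-1] - running_mean[0]:+.2f} C")
```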

"That's the important number to keep in mind," said GISS climatologist Gavin Schmidt. "The difference between the second and sixth warmest years is trivial because the known uncertainty in the temperature measurement is larger than some of the differences between the warmest years."

The near-record global temperatures of 2009 occurred despite an unseasonably cool December in much of North America. High air pressures from the Arctic decreased the east-west flow of the jet stream, while increasing its tendency to blow from north to south. The result was an unusual effect that caused frigid air from the Arctic to rush into North America and warmer mid-latitude air to shift toward the north. This left North America cooler than normal, while the Arctic was warmer than normal.

"The contiguous 48 states cover only 1.5 percent of the world area, so the United States' temperature does not affect the global temperature much," Hansen said.

GISS uses publicly available data from three sources to conduct its temperature analysis. The sources are weather data from more than a thousand meteorological stations around the world, satellite observations of sea surface temperatures, and Antarctic research station measurements.

Other research groups also track global temperature trends but use different analysis techniques. The Met Office Hadley Centre in the United Kingdom uses similar input measurements as GISS, for example, but it omits large areas of the Arctic and Antarctic where monitoring stations are sparse.

Although the two methods produce slightly differing results in the annual rankings, the decadal trends in the two records are essentially identical.

"There's a contradiction between the results shown here and popular perceptions about climate trends," Hansen said. "In the last decade, global warming has not stopped."

For more information about GISS's surface temperature record, visit: http://data.giss.nasa.gov/gistemp/

For related video and still images, visit: http://svs.gsfc.nasa.gov/goto?010557


The map shows temperature changes for the last decade--January 2000 to December 2009--relative to the 1951-1980 mean. Warmer areas are in red, cooler areas in blue. The largest temperature increases occurred in the Arctic and a portion of Antarctica. (Credit: NASA)

NASA (2010, January 22). Last decade was warmest on record, 2009 one of warmest years, NASA research finds. ScienceDaily. Retrieved January 22, 2010, from http://www.sciencedaily.com/releases/2010/01/100121170717.htm

Read more…

The Nonlinear Term in the Saint Venant Equation of SWMM 5

 

The flow equation has six components that have to be in balance at each time step:

1. The unsteady flow term, dQ/dt;

2. The friction loss term (normally based on Manning's equation, except for full force mains);

3. The bed slope term, dz/dx;

4. The water surface slope term, dy/dx;

5. The nonlinear term, d(Q^2/A)/dx; and

6. The entrance, exit and other loss terms.

All of these terms have to add up to zero at each time step. If the water surface slope becomes zero or negative, then the only way the equation can be balanced is for the flow to decrease. If the spike is due to a change in the downstream head versus the upstream head, then typically you will see a dip in the flow graph as the water surface slope term becomes flat or negative, followed by a rise in the flow as the upstream head increases versus the downstream head.
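As a sketch of how these pieces fit together (my own reconstruction of the standard dynamic wave form, not a quotation from the SWMM 5 documentation), the six terms can be written as a single momentum balance, with the hydraulic head H = z + y so that the bed slope and water surface slope terms travel together:

```latex
\frac{\partial Q}{\partial t}
  + \frac{\partial (Q^{2}/A)}{\partial x}
  + g A \frac{\partial H}{\partial x}
  + g A S_{f}
  + g A h_{L} = 0
```

Here Q is flow, A is the flow area, H = z + y is invert elevation plus depth, S_f is the friction slope from Manning's equation, and h_L stands for entrance, exit and other local losses per unit length. Reading left to right, these are item 1, item 5, items 3 and 4 combined, item 2, and item 6 from the list above; the solver adjusts Q at each time step until the sum is zero.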

You can get more than the normal flow expected from the head difference alone because, in addition to the head difference, you also get a push from the nonlinear terms (labeled dq3 and dq4 in the graph).


Read more…

Knowledge is Out, Focus is In

From Edge Magazine - http://edge.org/q2010/q10_16.html#dalrymple

DAVID DALRYMPLE

Researcher, MIT Mind Machine Project

KNOWLEDGE IS OUT, FOCUS IS IN, AND PEOPLE ARE EVERYWHERE

Filtering, not remembering, is the most important skill for those who use the Internet. The Internet immerses us in a milieu of information — not for almost 20 years has a Web user read every available page — and there's more each minute: Twitter alone processes hundreds of tweets every second, from all around the world, all visible for anyone, anywhere, who cares to see. Of course, the majority of this information is worthless to the majority of people. Yet anything we care to know — what's the function for opening files in Perl? how far is it from Hong Kong to London? what's a power law? — is out there somewhere.

I see today's Internet as having three primary, broad consequences: 1) information is no longer stored and retrieved by people, but is managed externally, by the Internet, 2) it is increasingly challenging and important for people to maintain their focus in a world where distractions are available anywhere, and 3) the Internet enables us to talk to and hear from people around the world effortlessly.

Before the Internet, most professional occupations required a large body of knowledge, accumulated over years or even decades of experience. But now, anyone with good critical thinking skills and the ability to focus on the important information can retrieve it on demand from the Internet, rather than her own memory. On the other hand, those with wandering minds, who might once have been able to focus by isolating themselves with their work, now often cannot work without the Internet, which simultaneously furnishes a panoply of unrelated information — whether about their friends' doings, celebrity news, limericks, or millions of other sources of distraction. The bottom line is that how well an employee can focus might now be more important than how knowledgeable he is. Knowledge was once an internal property of a person, and focus on the task at hand could be imposed externally, but with the Internet, knowledge can be supplied externally, but focus must be forced internally.

Separable from the intertwined issues of knowledge and focus is the irrelevance of geography in the Internet age. On the transmitting end, the Internet allows many types of professionals to work in any location — from their home in Long Island, from their condo in Miami, in an airport in Chicago, or even in flight on some airlines — wherever there's an Internet connection. On the receiving end, it allows for an Internet user to access content produced anywhere in the world with equal ease. The Internet also enables groups of people to assemble based on interest, rather than on geography — collaboration can take place between people in Edinburgh, Los Angeles, and Perth nearly as easily as if they lived in neighboring cities.

In the future, these trends will continue, with the development of increasingly subconscious interfaces. Already, making an Internet search is something many people do without thinking about it, like making coffee or driving a car. Within the next 50 years, I expect the development of direct neural links, making the data that's available at our fingertips today available at our synapses in the future, and making virtual reality actually feel more real than traditional sensory perception. Information and experience could be exchanged between our brains and the network without any conscious action. And at some point, knowledge may be so external, all knowledge and experience will be shared universally, and the only notion of an "individual" will be a particular focus — a point in the vast network that concerns itself only with a specific subset of the information available.

In this future, knowledge will be fully outside the individual, focus will be fully inside, and everybody's selves will truly be spread everywhere.

Read more…

The Power of First Experiences

Heartbreak and Home Runs: The Power of First Experiences

Read more…
A link to a good discussion of the validation of simulation models: http://www.ejbrm.com/vol4/v4-i1/Martis.pdf

Validation of Simulation Based Models: A Theoretical Outlook
Morvin Savio Martis, Manipal Institute of Technology, India (oceanmartis@yahoo.com)

Abstract: Validation is the most incomprehensible part of developing a model. Nevertheless, no model can be accepted unless it has passed the tests of validation, since the procedure of validation is vital to ascertain the credibility of the model. Validation procedures are usually framework based and dynamic, but a methodical procedure can be followed by a modeller (researcher) in order to authenticate the model. The paper starts with a discussion on the views and burning issues by various researchers on model validation and the foundational terminology involved. The paper later highlights the methodology and the process of validation adopted. Reasons for the failure of the model have also been explored. The paper finally focuses on the widely approved validation schemes (both quantitative and qualitative) and techniques in practice, since no one test can determine the credibility and validity of a simulation model. Moreover, as the model passes more tests (both quantitative and qualitative), the confidence in the model increases correspondingly.

Keywords: Validation, simulation, dynamic models, validation schemes, validation process, modelling
Read more…

Weather Underground Data and SWMM 5

Weather Underground is a site that provides excellent local weather information in the form of graphs, tables and csv files. You can use the data in SWMM 5 very easily by copying it from Excel into a SWMM 5 time series.



Once the csv file has been imported into Excel and the Text to Columns tool has been applied, the data is almost ready for SWMM 5; the remaining step is to adjust the time column so that every value falls on an even 5-minute interval. In Excel you can use the formula =ROUND(B2/"0:05:00",0)*"0:05:00" to round all of the time values to the nearest 5 minutes. If you skip this step you will have problems in SWMM 5, because the rainfall interval will not match the defined rain gage interval.
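If you would rather script this cleanup than do it in Excel, a minimal pandas sketch is shown below. The file name and the "Date", "Time" and "Rainfall" column names are assumptions about the Weather Underground export, so adjust them to match the actual csv header.

```python
import pandas as pd

# Load the Weather Underground csv export (file and column names are hypothetical).
df = pd.read_csv("wunderground_export.csv")

# Combine date and time, then round every timestamp to the nearest 5-minute mark
# so it matches the rain gage interval defined in SWMM 5.
timestamps = pd.to_datetime(df["Date"] + " " + df["Time"])
rounded = timestamps.dt.round("5min")

# Write out date, rounded time and rainfall in a tab-separated form that can be
# pasted straight into a SWMM 5 time series.
out = pd.DataFrame({
    "Date": rounded.dt.strftime("%m/%d/%Y"),
    "Time": rounded.dt.strftime("%H:%M"),
    "Rainfall": df["Rainfall"],
})
out.to_csv("swmm_timeseries.txt", sep="\t", index=False, header=False)
```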

Open a new time series in SWMM 5 and then copy and paste the date, rounded time and rainfall columns into it.


Read more…

InfoSWMM and H2OMAP SWMM Version 8.0

MWH Soft Releases InfoSWMM Version 8.0, Redefining Standards in Practicality and Productivity

Latest Iteration of Leading GIS-Centric Urban Drainage Modeling and Design Software Sets New Standard for Ease, Power, Speed, Functionality and Performance

Broomfield, Colorado USA, July 22, 2009 — In its ongoing quest to equip the global wastewater industry with the world’s most comprehensive and innovative GIS-centric modeling and design solutions, MWH Soft, a leading global provider of environmental and water resources applications software, today announced the worldwide availability of the V8 Generation of its industry-leading InfoSWMM for ArcGIS (ESRI, Redlands, CA). The latest release marks the most significant milestone to date in the evolution of the company’s flagship urban drainage modeling and design product, firmly establishing it as the number one choice for the effective evaluation, design, management, rehabilitation and operation of wastewater and stormwater collection systems. It provides unmatched benefits with an unprecedented combination of power, functionality, seamless GIS integration, and ease of use.

Underlining MWH Soft’s leadership in the wastewater industry, InfoSWMM reflects the company’s ongoing commitment to delivering pioneering technology that raises the bar for urban drainage network modeling, helping to shape the future of this critical sector. The full-featured urban drainage network analysis and design program delivers the highest rate of return in the industry, and is the world’s first and only urban drainage modeling solution certified by the National Association of GIS-Centric Software (www.nagcs.com). All operations of a typical sewer system — from analysis and design to management functions such as water quality assessment, pollution prediction, sediment transport, urban flooding, real-time control and record keeping — are addressed in a single, fully integrated geoengineering environment whose powerful hydraulic computational engine is endorsed by the USEPA and certified by FEMA.

Focused on expanded geospatial functions and performance, V8 features a host of unique new capabilities to help wastewater engineers and planners develop better designs and operational improvements faster and more efficiently. They include an intuitive, time-saving user interface; impressive graphics; greatly accelerated creation of better, more accurate models; and more advanced design analysis capabilities than any other wastewater modeling software. V8 gives users unprecedented power in managing urban runoff and wet weather water quality problems in combined, sanitary and storm sewers; optimizing BMP and LID designs; and meeting SSO and CSO regulations. Multiple hydrology and infiltration methods coupled with highly sophisticated Real-Time Control (RTC) schemes optimize the operational management of wastewater systems and hydraulic structures. Unparalleled performance modeling sets new benchmarks in scalability, reliability, functionality and flexibility within the powerful ArcGIS environment. InfoSWMM V8 addresses scores of customer-requested enhancements and new features, plus significant innovations that break new ground in productivity and efficiency for engineering GIS modeling.

Key new modeling tools include the ability to accurately simulate the transport and gravitational settling of sediments (waste solids) over time throughout the sewer collection system under varying hydraulic conditions; the option of designating specific conduits as culverts and computing their inlet control flow under dynamic wave flow routing; and an optional baseline time pattern for external inflows at nodes that can be used to apply a periodic adjustment to the baseline inflow value (e.g., by month of year, day of week, etc.). Other introductions include a new analytical CSTR solution for storage unit mass balance, support of subcatchment depression storage for SCS hydrology, the addition of two new types of time conditions (month of the year and day of the week) that can be used in any real-time control rule condition clause, and significantly improved data processing and computational speed. Users can now choose to ignore any combination of rainfall/runoff, snowmelt, groundwater, flow routing, and water quality process models when running a simulation, resulting in even faster simulation run times. The software is also fully compatible with the latest release of EPA SWMM5 (5.0.016) and can be used to generate more complete tabular results, statistics reports, and summary tables for all network components.

InfoSWMM is quickly becoming the must-have solution for comprehensive enterprise-wide geospatial urban drainage and sewer systems engineering. With its intuitive GIS-centric working environment and new cutting-edge features, Generation V8 delivers unmatched capabilities to the industry, backed by unparalleled technical support.

“InfoSWMM V8 contains an array of new features and enhanced capabilities for both new and existing customers,” said Paul F. Boulos, Ph.D., President and Chief Operating Officer of MWH Soft. “It reflects our company’s continued commitment to consistently deliver best-in-class sewer modeling and design technology.”

“We’ve combined the best technological innovations of MWH Soft’s R&D with the requests of our customers to produce one of the most significant product releases in our history in terms of usability and performance,” said Boulos. “This marks the pinnacle in a series of product innovations that have set InfoSWMM far apart from its competitors. The level of innovation in V8 is typical of our entire product portfolio, which is unequaled in breadth, depth and best-in-class solutions. These advances empower our customers to increase productivity, improve engineering quality and optimize system performance and designs. Because they can be accessed by any user, regardless of technical expertise, these benefits can be extended across the entire enterprise. We are thrilled to make this one-of-a-kind product available to the extended wastewater and urban drainage modeling communities.”

Pricing and availability
Upgrade to InfoSWMM V8 is now available worldwide by subscription to the Gold, Platinum or Executive program. Subscription members can immediately download the new version free of charge directly from www.mwhsoft.com. The MWH Soft Subscription Program is a friendly customer support and software maintenance program that ensures the longevity and usefulness of MWH Soft products. It gives subscribers instant access to new functionality as it is developed, along with automatic software updates and upgrades. For the latest information on the MWH Soft Subscription Program, visit www.mwhsoft.com or contact your local MWH Soft Certified Representative.

About MWH Soft
MWH Soft is a leading global provider of technical and infrastructure software and professional solutions designed to meet the technological needs of utilities, government industries, and engineering organizations worldwide. Its clients include the majority of the largest North American cities and ENR top design firms. The multifaceted, state-of-the-art CAD, GIS and Internet enabled products created by MWH Soft empower thousands of engineers across the globe to competitively plan, manage, design, protect, maintain and operate highly efficient and reliable infrastructure systems. For more information, call MWH Soft at (626) 568-6868, or visit www.mwhsoft.com.

MWH Soft Contact
Adam J. Simonsen
Director of Marketing
Adam.Simonsen@mwhsoft.com
(626) 568-6868
Read more…

InfoSWMM and H2OMAP SWMM 8.5


MWH Soft Releases InfoSWMM and H2OMAP SWMM Version 8.5,
Leveraging the Latest EPA SWMM5 Functionality


Newest Iteration of Industry-Leading Geospatial Urban Drainage Modeling and Design Software
Delivers Expanded Engineering Simulation Value


Broomfield, Colorado USA, November 11, 2009 — MWH Soft, the leading global provider of environmental and water resources applications software, today announced the immediate release of Generation V8.5 of H2OMAP SWMM and InfoSWMM for ArcGIS (ESRI, Redlands, CA). The new version adds powerful features and leverages engine enhancements included in the latest release of EPA SWMM5 (5.0.017). It also improves breadth and performance, extending the MWH Soft tradition of including new enhancements specifically requested by customers. Version 8.5 marks a significant evolution of the company’s SWMM-based urban drainage modeling and design products, which continue to be top choices for the effective evaluation, design, management, rehabilitation and operation of wastewater and stormwater collection systems.

Underlining MWH Soft’s leadership in the wastewater industry, InfoSWMM and H2OMAP SWMM reflect the company’s ongoing commitment to delivering pioneering technology that raises the bar for urban drainage network modeling and simulation, helping to shape the future of this critical sector. The full-featured InfoSWMM urban drainage network analysis and design program is the only urban drainage modeling solution certified by the National Association of GIS-Centric Software (www.nagcs.com). It addresses all operations of a typical sewer system — from analysis and design to management functions such as water quality assessment, pollution prediction, sediment transport, urban flooding, real-time control and record keeping — in a single, fully integrated geoengineering environment whose powerful hydraulic computational engine is endorsed by the USEPA and certified by FEMA.

H2OMAP SWMM is a fully dynamic, geospatial wastewater and stormwater modeling, simulation and management software application. It can be effectively used to model the entire land phase of the hydrologic cycle as applied to urban stormwater and wastewater collection systems. The model can perform single event or long-term (continuous) rainfall-runoff simulations that account for climate, soil, land use, and topographic conditions of the watershed. H2OMAP SWMM supports geocoding and multiple mapping layers which can be imported from one of many data sources, including Computer-Aided Design (CAD) drawings (e.g., dwg, dgn, dxf); CAD world files; standard GIS formats (Shapefiles, Generate files, MID/MIF files, and ArcInfo coverages); Vector Product Format (VPF) files; attribute tables; grid data; image files; ODBC files; and CSV files.

Focused on expanded hydraulic improvements, V8.5 features a host of unique new capabilities to help wastewater engineers and planners develop better designs and operational improvements faster and more efficiently. They include enhancements to the transition between node surcharging and node flooding, stronger model validation without interrupting simulation runs, greater RDII data compatibility, infiltration changes that allow the SWMM engine to behave more like TR-55 and TR-20, and the addition of default concentration for dry weather flow pollutants to enable more accurate water quality analysis.

“Continued innovation is a hallmark of MWH Soft,” said Paul F. Boulos, Ph.D., Hon.D.WRE, F.ASCE, President and COO of MWH Soft. “This release continues to set a tempo for rapid, quality product development that widely differentiates MWH Soft from its competitors. The level of innovation in V8.5 is typical of our entire product portfolio, which is unequaled in breadth, depth, strength and best-in-class solutions. These advances empower our customers to solve their most challenging urban drainage problems, increase productivity, improve engineering quality, and optimize system performance and designs. We’ve achieved exciting milestones in this new release but this is just the beginning. Our world-class development team with a very strong engineering focus will provide our customers with future capability that we can only imagine today. We are thrilled to make this one-of-a-kind product available to the extended wastewater and urban drainage modeling communities to help them sustain their infrastructure and improve communities around the world.”

Pricing and availability
Upgrades to InfoSWMM and H2OMAP SWMM V8.5 are now available worldwide by subscription to the Gold, Platinum or Executive program. Subscription members can immediately download the new version free of charge directly from www.mwhsoft.com. The MWH Soft Subscription Program is a friendly customer support and software maintenance program that ensures the longevity and usefulness of MWH Soft products. It gives subscribers instant access to new functionality as it is developed, along with automatic software updates and upgrades. For the latest information on the MWH Soft Subscription Program, visit www.mwhsoft.com or contact your local MWH Soft Certified Representative.

About MWH Soft
MWH Soft is a global leader in infrastructure engineering software and professional solutions designed to meet the technological needs of water, wastewater, and stormwater utilities, government agencies, engineering organizations and academic institutions worldwide. With offices in North America, Europe, and Asia Pacific, MWH Soft product lines empower thousands of engineers to competitively plan, manage, design, protect, maintain and operate highly efficient and reliable infrastructure systems.

Products include unrivalled choice in hydraulic modeling with multi-user workgroup management (InfoWorks, FloodWorks), stand-alone (H2OMAP), ArcGIS-based (InfoWater, InfoSewer, InfoSWMM), and AutoCAD-based (H2ONET) software tools for Water Distribution, Sewer and Drainage, Stormwater Management, River Systems and Flood Forecasting; and advanced Asset, Data, and Capital Planning management platforms (InfoNET, InfoNET Mobile, and CapPlan). For more information call MWH Soft at +1 626-568-6868, or visit www.mwhsoft.com.

MWH Soft Contact
Adam J. Simonsen
Director of Marketing
Adam.Simonsen@mwhsoft.com
+1 626 568-6868

Read more…

SWMM 5 Applications Manual

A note from LAR of the EPA today. A new SWMM5 Applications Manual is now available for downloading from the EPA SWMM web page http://www.epa.gov/ednnrmrl/models/swmm/index.htm. It was written by Jorge Gironas and Larry Roesner of Colorado State University (Jorge is now on the faculty of the Catholic University of Chile). It contains nine worked-out examples that illustrate how SWMM can be used to model some of the most common types of stormwater management and design problems encountered in practice. These include: computing runoff for both pre- and post development conditions; analyzing the hydraulics of simple collection systems; designing a multi-purpose detention pond; modeling distributed low impact runoff controls; simulating the buildup, washoff, transport and treatment of stormwater pollutants; analyzing both dual drainage and combined sewer systems; and running long-term continuous simulations. Each example is accompanied by a complete SWMM input data file.

The FOREWORD from the Manual

The U.S. Environmental Protection Agency is charged by Congress with protecting the Nation’s land, air, and water resources. Under a mandate of national environmental laws, the Agency strives to formulate and implement actions leading to a compatible balance between human activities and the ability of natural systems to support and nurture life. To meet this mandate, EPA’s research program is providing data and technical support for solving environmental problems today and building a science knowledge base necessary to manage our ecological resources wisely, understand how pollutants affect our health, and prevent or reduce environmental risks in the future.

The National Risk Management Research Laboratory is the Agency’s center for investigation of technological and management approaches for reducing risks from threats to human health and the environment. The focus of the Laboratory’s research program is on methods for the prevention and control of pollution to the air, land, water, and subsurface resources; protection of water quality in public water systems; remediation of contaminated sites and ground water; and prevention and control of indoor air pollution. The goal of this research effort is to catalyze development and implementation of innovative, cost-effective environmental technologies; develop scientific and engineering information needed by EPA to support regulatory and policy decisions; and provide technical support and information transfer to ensure effective implementation of environmental regulations and strategies.

Water quality impairment due to runoff from urban and developing areas continues to be a major threat to the ecological health of our nation’s waterways. The EPA Stormwater Management Model is a computer program that can assess the impacts of such runoff and evaluate the effectiveness of mitigation strategies. This manual presents a number of worked-out examples that show new users how to set up and apply SWMM to the most common types of stormwater management and design problems encountered in practice.

Sally C. Gutierrez, Director, National Risk Management Research Laboratory


Using the Link Geometry to Divide the Flow

You can use the geometry of the connecting pipes to divide the flow, instead of a flow divider node, in the dynamic wave solution of SWMM 5. You can try to do the same thing with an Outlet link, but the method of using two outlets is sometimes very unstable and requires a small time step just to keep the continuity error down. I used two flat rectangular links with the same maximum depth but different width values (Figure 1). The flow was split based on the value of Q full for each link, which you can see in the text output file (Figure 2).
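As a rough check on the kind of split shown in Figure 2, here is a small Python sketch that computes Manning's full-flow capacity for two rectangular links and divides a 10 mgd inflow in proportion to Q full. The widths, depth, slope and roughness are illustrative assumptions, not the values from the actual model, and a truly flat link would rely on SWMM's minimum-slope handling, which the sketch ignores.

# Hedged sketch: approximate how an inflow divides between two parallel
# rectangular links when the split follows each link's full-flow capacity
# (Q full from Manning's equation, US units). All dimensions are assumptions.

def q_full_rect(width_ft, depth_ft, slope, n):
    """Manning full-flow capacity (cfs) of a rectangular channel."""
    area = width_ft * depth_ft                 # full flow area, ft^2
    perimeter = width_ft + 2.0 * depth_ft      # wetted perimeter at full depth, ft
    r_hyd = area / perimeter                   # hydraulic radius, ft
    return (1.49 / n) * area * r_hyd ** (2.0 / 3.0) * slope ** 0.5

q1 = q_full_rect(width_ft=2.0, depth_ft=3.0, slope=0.001, n=0.013)  # narrower link
q2 = q_full_rect(width_ft=6.0, depth_ft=3.0, slope=0.001, n=0.013)  # wider link

inflow_mgd = 10.0                              # only the Q full ratio matters, so no unit conversion
f1, f2 = q1 / (q1 + q2), q2 / (q1 + q2)
print(f"Link 1: {f1 * inflow_mgd:.2f} mgd ({f1:.0%}), Link 2: {f2 * inflow_mgd:.2f} mgd ({f2:.0%})")

Because Q full grows with both the flow area and the hydraulic radius, the wider link takes a somewhat larger share than the width ratio alone would suggest, which is the kind of division reported in the text output file.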

Figure 1. Link geometry.

Figure 2. Flow division from an inflow of 10 mgd.

Figure 3. SWMM 5.0.016 cross-section geometry for the two rectangular links.

Hurricanes 1000 Years Ago

Hurricane tracks since 1851 (image via Wikipedia).

Medieval Storms Portend Worse Hurricanes

By Phil Berardelli
ScienceNOW Daily News
12 August 2009

Researchers examining ocean sediments have concluded that current climate conditions resemble those that led to peak Atlantic hurricane activity about 1000 years ago. So if you live anywhere from the Caribbean to the coast of Maine, prepare for the possibility of stronger and more frequent storms.

For more than a decade, scientists have been trying to determine whether climate change is linked to intense storms, such as 2005's Hurricane Katrina. Meteorologist Michael Mann of Pennsylvania State University, University Park, and colleagues attacked the question by turning to the past. They looked through drill cores from coastal waters for signs that sediments had been disturbed by major storms. Eight sites along the U.S. East Coast and Puerto Rico provided a reliable record of the number of significant hurricanes going back about 1500 years. Other climate data and models added clues to water temperatures and hurricane intensity.

As the researchers report tomorrow in Nature, they found strong evidence that Atlantic hurricane activity peaked about 1000 years ago, producing up to 15 hurricanes a year on average--a level matched in recent times only over the past decade and a half. At the time, according to estimates constructed from other geologic data, Atlantic water temperatures were relatively warm, "though not as warm as today," Mann says. And Pacific temperatures were relatively cool, thanks to La Niña events. Warmer Atlantic waters whip up more storms, but warmer Pacific temperatures tend to create stronger jet streams that break up those storms. So the twin conditions a millennium ago produced kind of a "Perfect Storm" for hurricanes, he explains.

Of particular interest, the sediments reveal a close link between warmer water and the number of hurricanes during the past 150 years or so. Dropping temperatures produced seven or eight hurricanes a year, while a rising thermometer, such as in the earlier part of this decade, pushed the total to 15. "All other things being equal," Mann says, "this suggests that we are indeed likely to see not only stronger hurricanes in the Atlantic but perhaps more of them" in the near future.

Meteorologist James Elsner of Florida State University in Tallahassee agrees with the findings, but adds a caveat. The historical data do show that a link between warmer ocean temperatures and higher hurricane frequencies has existed for at least 1500 years, he says. However, there's a high degree of uncertainty in the data. That and the fact that the physics explaining the link haven't yet been established, Elsner explains, "indicates this is not the 'smoking gun' we've been looking for that would allow us to confidently project what will happen as the oceans continue to warm."

Atmospheric scientist Kerry Emanuel of the Massachusetts Institute of Technology in Cambridge agrees. The paper "shows that hurricane activity is indeed quite sensitive to climate," he says, and that global warming could have a dramatic impact on the severity of these storms in the future.

http://sciencenow.sciencemag.org/cgi/content/full/2009/812/1?rss=1



How a Raindrop Is Like an Exploding Parachute

By Karen C. Fox
ScienceNOW Daily News
20 July 2009

Here's a question for a rainy day: How do clouds create such a wide variety of raindrop sizes? The answer, according to stunning new high-speed movies, is much simpler than physicists thought.

The idea has been that raindrops grow as they gently bump into each other and coalesce. Meanwhile, more forceful collisions break other drops apart into a scattering of smaller droplets. All this action would explain the wide distribution of shapes and sizes. But trying to unravel how the drops crash and break up led to a tough set of equations.

The new movies, however, show a much more straightforward process. Researchers snapped 1000 pictures a second of an isolated water drop as it fell through an ascending air stream. The drop first flattens into a pancake shape, which then balloons like a parachute. The bottom edge of this chute carries a thick, irregularly corrugated rim. Pressure from the air drag eventually breaks the chute apart into numerous smaller droplets--their wide range of sizes is due to the wide range of sizes of the bumps in the rim.

Falling water. Watch a water drop break apart in midair. (Credit: Emmanuel Villermaux)

Overall, the process is sufficient to account for a wide variety of raindrop sizes without needing to resort to drops colliding in midair, says lead author and physicist Emmanuel Villermaux of Aix-Marseille Universite in Marseille, France. More importantly, he says, the equations needed to describe the exploding drops are far less complex than those that would be needed to describe many drops colliding with each other, breaking up and coalescing repeatedly over time.

If a single raindrop breaks up in a statistically predictable way, then the range of drop sizes in an entire rain shower depends only on the intensity of the rainfall: heavier rain starts with larger drops and produces a broader size distribution, while fine mists consist of small, homogeneous drops. Villermaux and Aix-Marseille colleague Benjamin Bossa publish their findings online today in Nature Physics.

Physicist Jens Eggers, who studies the dynamics of water drops at the University of Bristol in the United Kingdom, is breathing a sigh of relief. "I was expecting things to get complicated, with lots of empirical relationships thrown together," he writes in an e-mail to Science. "Instead, based on a few physical ideas, the authors manage to explain a beautiful empirical relationship ... in a simple and universal way."

Atmospheric scientists, who have long believed that raindrop size is determined inside a cloud and by complex interactions as the drops fall, may take more convincing. "Mainstream cloud physicists will reject this thesis," e-mails Ramesh Srivastava, an atmospheric scientist who studies cloud dynamics at the University of Chicago in Illinois. Srivastava says that raindrop size distributions observed in practice do not seem to match the paper's predictions, whereas Villermaux says the evidence shows agreement about 100 meters below the cloud, after the drops have had a chance to break up.

Regardless of who's right, the work isn't likely to see application any time soon. Villermaux says the findings are unlikely to aid weather forecasting or climate modeling, for example. "It's just for the pleasure of understanding."

http://sciencenow.sciencemag.org/cgi/content/full/2009/720/2?rss=1




Get Smarter for the Future

Ideas: Technology July/August 2009

Pandemics. Global warming. Food shortages. No more fossil fuels. What are humans to do? The same thing the species has done before: evolve to meet the challenge. But this time we don’t have to rely on natural evolution to make us smart enough to survive. We can do it ourselves, right now, by harnessing technology and pharmacology to boost our intelligence. Is Google actually making us smarter?

by Jamais Cascio

Get Smarter

Image: Anastasia Vasilakis

Seventy-four thousand years ago, humanity nearly went extinct. A super-volcano at what’s now Lake Toba, in Sumatra, erupted with a strength more than a thousand times that of Mount St. Helens in 1980. Some 800 cubic kilometers of ash filled the skies of the Northern Hemisphere, lowering global temperatures and pushing a climate already on the verge of an ice age over the edge. Some scientists speculate that as the Earth went into a deep freeze, the population of Homo sapiens may have dropped to as low as a few thousand families.

The Mount Toba incident, although unprecedented in magnitude, was part of a broad pattern. For a period of 2 million years, ending with the last ice age around 10,000 B.C., the Earth experienced a series of convulsive glacial events. This rapid-fire climate change meant that humans couldn’t rely on consistent patterns to know which animals to hunt, which plants to gather, or even which predators might be waiting around the corner.

How did we cope? By getting smarter. The neurophysiologist William Calvin argues persuasively that modern human cognition—including sophisticated language and the capacity to plan ahead—evolved in response to the demands of this long age of turbulence. According to Calvin, the reason we survived is that our brains changed to meet the challenge: we transformed the ability to target a moving animal with a thrown rock into a capability for foresight and long-term planning. In the process, we may have developed syntax and formal structure from our simple language.

Our present century may not be quite as perilous for the human race as an ice age in the aftermath of a super-volcano eruption, but the next few decades will pose enormous hurdles that go beyond the climate crisis. The end of the fossil-fuel era, the fragility of the global food web, growing population density, and the spread of pandemics, as well as the emergence of radically transformative bio- and nanotechnologies—each of these threatens us with broad disruption or even devastation. And as good as our brains have become at planning ahead, we’re still biased toward looking for near-term, simple threats. Subtle, long-term risks, particularly those involving complex, global processes, remain devilishly hard for us to manage.

But here’s an optimistic scenario for you: if the next several decades are as bad as some of us fear they could be, we can respond, and survive, the way our species has done time and again: by getting smarter. But this time, we don’t have to rely solely on natural evolutionary processes to boost our intelligence. We can do it ourselves.

Most people don’t realize that this process is already under way. In fact, it’s happening all around us, across the full spectrum of how we understand intelligence. It’s visible in the hive mind of the Internet, in the powerful tools for simulation and visualization that are jump-starting new scientific disciplines, and in the development of drugs that some people (myself included) have discovered let them study harder, focus better, and stay awake longer with full clarity. So far, these augmentations have largely been outside of our bodies, but they’re very much part of who we are today: they’re physically separate from us, but we and they are becoming cognitively inseparable. And advances over the next few decades, driven by breakthroughs in genetic engineering and artificial intelligence, will make today’s technologies seem primitive. The nascent jargon of the field describes this as “intelligence augmentation.” I prefer to think of it as “You+.”

Scientists refer to the 12,000 years or so since the last ice age as the Holocene epoch. It encompasses the rise of human civilization and our co-evolution with tools and technologies that allow us to grapple with our physical environment. But if intelligence augmentation has the kind of impact I expect, we may soon have to start thinking of ourselves as living in an entirely new era. The focus of our technological evolution would be less on how we manage and adapt to our physical world, and more on how we manage and adapt to the immense amount of knowledge we’ve created. We can call it the Nöocene epoch, from Pierre Teilhard de Chardin’s concept of the Nöosphere, a collective consciousness created by the deepening interaction of human minds. As that epoch draws closer, the world is becoming a very different place.

Of course, we’ve been augmenting our ability to think for millennia. When we developed written language, we significantly increased our functional memory and our ability to share insights and knowledge across time and space. The same thing happened with the invention of the printing press, the telegraph, and the radio. The rise of urbanization allowed a fraction of the populace to focus on more-cerebral tasks—a fraction that grew inexorably as more-complex economic and social practices demanded more knowledge work, and industrial technology reduced the demand for manual labor. And caffeine and nicotine, of course, are both classic cognitive-enhancement drugs, primitive though they may be.

With every technological step forward, though, has come anxiety about the possibility that technology harms our natural ability to think. These anxieties were given eloquent expression in these pages by Nicholas Carr, whose essay “Is Google Making Us Stupid?” (July/August 2008 Atlantic) argued that the information-dense, hyperlink-rich, spastically churning Internet medium is effectively rewiring our brains, making it harder for us to engage in deep, relaxed contemplation.

Carr’s fears about the impact of wall-to-wall connectivity on the human intellect echo cyber-theorist Linda Stone’s description of “continuous partial attention,” the modern phenomenon of having multiple activities and connections under way simultaneously. We’re becoming so accustomed to interruption that we’re starting to find focusing difficult, even when we’ve achieved a bit of quiet. It’s an induced form of ADD—a “continuous partial attention-deficit disorder,” if you will.

There’s also just more information out there—because unlike with previous information media, with the Internet, creating material is nearly as easy as consuming it. And it’s easy to mistake more voices for more noise. In reality, though, the proliferation of diverse voices may actually improve our overall ability to think. In Everything Bad Is Good for You, Steven Johnson argues that the increasing complexity and range of media we engage with have, over the past century, made us smarter, rather than dumber, by providing a form of cognitive calisthenics. Even pulp-television shows and video games have become extraordinarily dense with detail, filled with subtle references to broader subjects, and more open to interactive engagement. They reward the capacity to make connections and to see patterns—precisely the kinds of skills we need for managing an information glut.

Scientists describe these skills as our “fluid intelligence”—the ability to find meaning in confusion and to solve new problems, independent of acquired knowledge. Fluid intelligence doesn’t look much like the capacity to memorize and recite facts, the skills that people have traditionally associated with brainpower. But building it up may improve the capacity to think deeply that Carr and others fear we’re losing for good. And we shouldn’t let the stresses associated with a transition to a new era blind us to that era’s astonishing potential. We swim in an ocean of data, accessible from nearly anywhere, generated by billions of devices. We’re only beginning to explore what we can do with this knowledge-at-a-touch.

Moreover, the technology-induced ADD that’s associated with this new world may be a short-term problem. The trouble isn’t that we have too much information at our fingertips, but that our tools for managing it are still in their infancy. Worries about “information overload” predate the rise of the Web (Alvin Toffler coined the phrase in 1970), and many of the technologies that Carr worries about were developed precisely to help us get some control over a flood of data and ideas. Google isn’t the problem; it’s the beginning of a solution.

In any case, there’s no going back. The information sea isn’t going to dry up, and relying on cognitive habits evolved and perfected in an era of limited information flow—and limited information access—is futile. Strengthening our fluid intelligence is the only viable approach to navigating the age of constant connectivity.

When people hear the phrase intelligence augmentation, they tend to envision people with computer chips plugged into their brains, or a genetically engineered race of post-human super-geniuses. Neither of these visions is likely to be realized, for reasons familiar to any Best Buy shopper. In a world of ongoing technological acceleration, today’s cutting-edge brain implant would be tomorrow’s obsolete junk—and good luck if the protocols change or you’re on the wrong side of a “format war” (anyone want a Betamax implant?). And then there’s the question of stability: Would you want a chip in your head made by the same folks that made your cell phone, or your PC?

Likewise, the safe modification of human genetics is still years away. And even after genetic modification of adult neurobiology becomes possible, the science will remain in flux; our understanding of how augmentation works, and what kinds of genetic modifications are possible, would still change rapidly. As with digital implants, the brain modification you might undergo one week could become obsolete the next. Who would want a 2025-vintage brain when you’re competing against hotshots with Model 2026?

Yet in one sense, the age of the cyborg and the super-genius has already arrived. It just involves external information and communication devices instead of implants and genetic modification. The bioethicist James Hughes of Trinity College refers to all of this as “exocortical technology,” but you can just think of it as “stuff you already own.” Increasingly, we buttress our cognitive functions with our computing systems, no matter that the connections are mediated by simple typing and pointing. These tools enable our brains to do things that would once have been almost unimaginable:

• powerful simulations and massive data sets allow physicists to visualize, understand, and debate models of an 11‑dimension universe;

• real-time data from satellites, global environmental databases, and high-resolution models allow geophysicists to recognize the subtle signs of long-term changes to the planet;

• cross-connected scheduling systems allow anyone to assemble, with a few clicks, a complex, multimodal travel itinerary that would have taken a human travel agent days to create.

If that last example sounds prosaic, it simply reflects how embedded these kinds of augmentation have become. Not much more than a decade ago, such a tool was outrageously impressive—and it destroyed the travel-agent industry.

That industry won’t be the last one to go. Any occupation requiring pattern-matching and the ability to find obscure connections will quickly morph from the domain of experts to that of ordinary people whose intelligence has been augmented by cheap digital tools. Humans won’t be taken out of the loop—in fact, many, many more humans will have the capacity to do something that was once limited to a hermetic priesthood. Intelligence augmentation decreases the need for specialization and increases participatory complexity.

As the digital systems we rely upon become faster, more sophisticated, and (with the usual hiccups) more capable, we’re becoming more sophisticated and capable too. It’s a form of co-evolution: we learn to adapt our thinking and expectations to these digital systems, even as the system designs become more complex and powerful to meet more of our needs—and eventually come to adapt to us.

Consider the Twitter phenomenon, which went from nearly invisible to nearly ubiquitous (at least among the online crowd) in early 2007. During busy periods, the user can easily be overwhelmed by the volume of incoming messages, most of which are of only passing interest. But there is a tiny minority of truly valuable posts. (Sometimes they have extreme value, as they did during the October 2007 wildfires in California and the November 2008 terrorist attacks in Mumbai.) At present, however, finding the most-useful bits requires wading through messages like “My kitty sneezed!” and “I hate this taco!”

But imagine if social tools like Twitter had a way to learn what kinds of messages you pay attention to, and which ones you discard. Over time, the messages that you don’t really care about might start to fade in the display, while the ones that you do want to see could get brighter. Such attention filters—or focus assistants—are likely to become important parts of how we handle our daily lives. We’ll move from a world of “continuous partial attention” to one we might call “continuous augmented awareness.”

As processor power increases, tools like Twitter may be able to draw on the complex simulations and massive data sets that have unleashed a revolution in science. They could become individualized systems that augment our capacity for planning and foresight, letting us play “what-if” with our life choices: where to live, what to study, maybe even where to go for dinner. Initially crude and clumsy, such a system would get better with more data and more experience; just as important, we’d get better at asking questions. These systems, perhaps linked to the cameras and microphones in our mobile devices, would eventually be able to pay attention to what we’re doing, and to our habits and language quirks, and learn to interpret our sometimes ambiguous desires. With enough time and complexity, they would be able to make useful suggestions without explicit prompting.

And such systems won’t be working for us alone. Intelligence has a strong social component; for example, we already provide crude cooperative information-filtering for each other. In time, our interactions through the use of such intimate technologies could dovetail with our use of collaborative knowledge systems (such as Wikipedia), to help us not just to build better data sets, but to filter them with greater precision. As our capacity to provide that filter gets faster and richer, it increasingly becomes something akin to collaborative intuition—in which everyone is effectively augmenting everyone else.

In pharmacology, too, the future is already here. One of the most prominent examples is a drug called modafinil. Developed in the 1970s, modafinil—sold in the U.S. under the brand name Provigil—appeared on the cultural radar in the late 1990s, when the American military began to test it for long-haul pilots. Extended use of modafinil can keep a person awake and alert for well over 32 hours on end, with only a full night’s sleep required to get back to a normal schedule.

While it is FDA-approved only for a few sleep disorders, like narcolepsy and sleep apnea, doctors increasingly prescribe it to those suffering from depression, to “shift workers” fighting fatigue, and to frequent business travelers dealing with time-zone shifts. I’m part of the latter group: like more and more professionals, I have a prescription for modafinil in order to help me overcome jet lag when I travel internationally. When I started taking the drug, I expected it to keep me awake; I didn’t expect it to make me feel smarter, but that’s exactly what happened. The change was subtle but clear, once I recognized it: within an hour of taking a standard 200-mg tablet, I was much more alert, and thinking with considerably more clarity and focus than usual. This isn’t just a subjective conclusion. A University of Cambridge study, published in 2003, concluded that modafinil confers a measurable cognitive-enhancement effect across a variety of mental tasks, including pattern recognition and spatial planning, and sharpens focus and alertness.

I’m not the only one who has taken advantage of this effect. The Silicon Valley insider webzine Tech Crunch reported in July 2008 that some entrepreneurs now see modafinil as an important competitive tool. The tone of the piece was judgmental, but the implication was clear: everybody’s doing it, and if you’re not, you’re probably falling behind.

This is one way a world of intelligence augmentation emerges. Little by little, people who don’t know about drugs like modafinil or don’t want to use them will face stiffer competition from the people who do. From the perspective of a culture immersed in athletic doping wars, the use of such drugs may seem like cheating. From the perspective of those who find that they’re much more productive using this form of enhancement, it’s no more cheating than getting a faster computer or a better education.

Modafinil isn’t the only example; on college campuses, the use of ADD drugs (such as Ritalin and Adderall) as study aids has become almost ubiquitous. But these enhancements are primitive. As the science improves, we could see other kinds of cognitive-modification drugs that boost recall, brain plasticity, even empathy and emotional intelligence. They would start as therapeutic treatments, but end up being used to make us “better than normal.” Eventually, some of these may become over-the-counter products at your local pharmacy, or in the juice and snack aisles at the supermarket. Spam e-mail would be full of offers to make your brain bigger, and your idea production more powerful.

Such a future would bear little resemblance to Brave New World or similar narcomantic nightmares; we may fear the idea of a population kept doped and placated, but we’re more likely to see a populace stuck in overdrive, searching out the last bits of competitive advantage, business insight, and radical innovation. No small amount of that innovation would be directed toward inventing the next, more powerful cognitive-enhancement technology.

This would be a different kind of nightmare, perhaps, and cause waves of moral panic and legislative restriction. Safety would be a huge issue. But as we’ve found with athletic doping, if there’s a technique for beating out rivals (no matter how risky), shutting it down is nearly impossible. This would be yet another pharmacological arms race—and in this case, the competitors on one side would just keep getting smarter.

The most radical form of superhuman intelligence, of course, wouldn’t be a mind augmented by drugs or exocortical technology; it would be a mind that isn’t human at all. Here we move from the realm of extrapolation to the realm of speculation, since solid predictions about artificial intelligence are notoriously hard: our understanding of how the brain creates the mind remains far from good enough to tell us how to construct a mind in a machine.

But while the concept remains controversial, I see no good argument for why a mind running on a machine platform instead of a biological platform will forever be impossible; whether one might appear in five years or 50 or 500, however, is uncertain. I lean toward 50, myself. That’s enough time to develop computing hardware able to run a high-speed neural network as sophisticated as that of a human brain, and enough time for the kids who will have grown up surrounded by virtual-world software and household robots—that is, the people who see this stuff not as “Technology,” but as everyday tools—to come to dominate the field.

Many proponents of developing an artificial mind are sure that such a breakthrough will be the biggest change in human history. They believe that a machine mind would soon modify itself to get smarter—and with its new intelligence, then figure out how to make itself smarter still. They refer to this intelligence explosion as “the Singularity,” a term applied by the computer scientist and science-fiction author Vernor Vinge. “Within thirty years, we will have the technological means to create superhuman intelligence,” Vinge wrote in 1993. “Shortly after, the human era will be ended.” The Singularity concept is a secular echo of Teilhard de Chardin’s “Omega Point,” the culmination of the Nöosphere at the end of history. Many believers in Singularity—which one wag has dubbed “the Rapture for nerds”—think that building the first real AI will be the last thing humans do. Some imagine this moment with terror, others with a bit of glee.

My own suspicion is that a stand-alone artificial mind will be more a tool of narrow utility than something especially apocalyptic. I don’t think the theory of an explosively self-improving AI is convincing—it’s based on too many assumptions about behavior and the nature of the mind. Moreover, AI researchers, after years of talking about this prospect, are already ultra-conscious of the risk of runaway systems.

More important, though, is that the same advances in processor and process that would produce a machine mind would also increase the power of our own cognitive-enhancement technologies. As intelligence augmentation allows us to make ourselves smarter, and then smarter still, AI may turn out to be just a sideshow: we could always be a step ahead.

So what’s life like in a world of brain doping, intuition networks, and the occasional artificial mind?

Banal.

Not from our present perspective, of course. For us, now, looking a generation ahead might seem surreal and dizzying. But remember: people living in, say, 2030 will have lived every moment from now until then—we won’t jump into the future. For someone going from 2009 to 2030 day by day, most of these changes wouldn’t be jarring; instead, they’d be incremental, almost overdetermined, and the occasional surprises would quickly blend into the flow of inevitability.

By 2030, then, we’ll likely have grown accustomed to (and perhaps even complacent about) a world where sophisticated foresight, detailed analysis and insight, and augmented awareness are commonplace. We’ll have developed a better capacity to manage both partial attention and laser-like focus, and be able to slip between the two with ease—perhaps by popping the right pill, or eating the right snack. Sometimes, our augmentation assistants will handle basic interactions on our behalf; that’s okay, though, because we’ll increasingly see those assistants as extensions of ourselves.

The amount of data we’ll have at our fingertips will be staggering, but we’ll finally have gotten over the notion that accumulated information alone is a hallmark of intelligence. The power of all of this knowledge will come from its ability to inform difficult decisions, and to support complex analysis. Most professions will likely use simulation and modeling in their day-to-day work, from political decisions to hairstyle options. In a world of augmented intelligence, we will have a far greater appreciation of the consequences of our actions.

This doesn’t mean we’ll all come to the same conclusions. We’ll still clash with each other’s emotions, desires, and beliefs. If anything, our arguments will be more intense, buttressed not just by strongly held opinions but by intricate reasoning. People in 2030 will look back aghast at how ridiculously unsubtle the political and cultural disputes of our present were, just as we might today snicker at simplistic advertising from a generation ago.

Conversely, the debates of the 2030s would be remarkable for us to behold. Nuance and multiple layers will characterize even casual disputes; our digital assistants will be there to catch any references we might miss. And all of this will be everyday, banal reality. Today, it sounds mind-boggling; by then, it won’t even merit comment.

What happens if such a complex system collapses? Disaster, of course. But don’t forget that we already depend upon enormously complex systems that we no longer even think of as technological. Urbanization, agriculture, and trade were at one time huge innovations. Their collapse (and all of them are now at risk, in different ways, as we have seen in recent months) would be an even greater catastrophe than the collapse of our growing webs of interconnected intelligence.

A less apocalyptic but more likely danger derives from the observation made by the science-fiction author William Gibson: “The future is already here, it’s just unevenly distributed.” The rich, whether nations or individuals, will inevitably gain access to many augmentations before anyone else. We know from history, though, that a world of limited access wouldn’t last forever, even as the technology improved: those who sought to impose limits would eventually face angry opponents with newer, better systems.

Even as competition provides access to these kinds of technologies, though, development paths won’t be identical. Some societies may be especially welcoming to biotech boosts; others may prefer to use digital tools. Some may readily adopt collaborative approaches; others may focus on individual enhancement. And around the world, many societies will reject the use of intelligence-enhancement technology entirely, or adopt a cautious wait-and-see posture.

The bad news is that these divergent paths may exacerbate cultural divides created by already divergent languages and beliefs. National rivalries often emphasize cultural differences, but for now we’re all still standard human beings. What happens when different groups quite literally think in very, very different ways?

The good news, though, is that this diversity of thought can also be a strength. Coping with the various world-historical dangers we face will require the greatest possible insight, creativity, and innovation. Our ability to build the future that we want—not just a future we can survive—depends on our capacity to understand the complex relationships of the world’s systems, to take advantage of the diversity of knowledge and experience our civilization embodies, and to fully appreciate the implications of our choices. Such an ability is increasingly within our grasp. The Nöocene awaits.
