More Difficult Texts

One of the most hazardous conditions a firefighter will ever encounter is a backdraft (also known as a smoke explosion). A backdraft can occur in the hot-smoldering phase of a fire when burning is incomplete and there is not enough oxygen to sustain the fire. Unburned carbon particles and other flammable products, combined with the intense heat, may cause instantaneous combustion if more oxygen reaches the fire. Firefighters should be aware of the conditions that indicate the possibility for a backdraft to occur. When there is a lack of oxygen during a fire, the smoke becomes filled with carbon dioxide or carbon monoxide and turns dense gray or black. Other warning signs of a potential backdraft are little or no visible flame, excessive heat, smoke leaving the building in puffs, muffled sounds, and smoke-stained windows. Proper ventilation will make a backdraft less likely. Opening a room or building at the highest point allows heated gases and smoke to be released gradually. However, suddenly breaking a window or opening a door is a mistake, because it allows oxygen to rush in, causing an explosion.


The human body can tolerate only a small range of temperature, especially when the person is engaged in vigorous activity. Heat reactions usually occur when large amounts of water and/or salt are lost through excessive sweating following strenuous exercise. When the body becomes overheated and cannot eliminate this excess heat, heat exhaustion and heat stroke are possible. Heat exhaustion is generally characterized by clammy skin, fatigue, nausea, dizziness, profuse perspiration, and sometimes fainting, resulting from an inadequate intake of water and the loss of fluids. First aid treatment for this condition includes having the victim lie down, raising the feet 8 to 12 inches, applying cool, wet cloths to the skin, and giving the victim sips of salt water (1 teaspoon per glass, half a glass every 15 minutes) over a 1-hour period. Heat stroke is much more serious; it is an immediate life-threatening situation. The characteristics of heat stroke are a high body temperature (which may reach 106° F or more); a rapid pulse; hot, dry skin; and a blocked sweating mechanism. Victims of this condition may be unconscious, and first-aid measures should be directed at quickly cooling the body. The victim should be placed in a tub of cold water or repeatedly sponged with cool water until his or her temperature is sufficiently lowered. Fans or air conditioners will also help with the cooling process. Care should be taken, however, not to over-chill the victim once the temperature is below 102° F.


It is surprisingly difficult to determine whether water is truly scarce in the physical sense at a global scale (a supply problem) or whether it is available but should be used better (a demand problem). Rijsberman (2006) reviews water scarcity indicators and global assessments based on these indicators. The most widely used indicator, the Falkenmark indicator, is popular because it is easy to apply and understand, but it does not help to explain the true nature of water scarcity. The more complex indicators are not widely applied because the data needed to apply them are lacking and their definitions are not intuitive. Water is definitely physically scarce in densely populated arid areas such as Central and West Asia and North Africa, with projected availabilities of less than 1000 m³/capita/year. This scarcity relates to water for food production, however, and not to water for domestic purposes, which is minute at this scale. In most of the rest of the world, water scarcity at a national scale has as much to do with the development of demand as with the availability of supply. Accounting for water for environmental requirements shows that abstraction of water for domestic, food, and industrial uses already has a major impact on ecosystems in many parts of the world, even those not considered ‘water scarce’.
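The passage notes that the Falkenmark indicator is popular precisely because it is easy to apply; the minimal Python sketch below shows how simple the calculation is. The thresholds used (1,700, 1,000, and 500 m³/capita/year for stress, scarcity, and absolute scarcity) follow the commonly cited convention and are an assumption here, since the passage itself mentions only the 1,000 m³/capita/year level.

def falkenmark_category(renewable_water_m3: float, population: int) -> str:
    """Classify a region by annual renewable water per capita (Falkenmark indicator)."""
    per_capita = renewable_water_m3 / population
    if per_capita < 500:
        return "absolute scarcity"
    if per_capita < 1_000:
        return "water scarcity"
    if per_capita < 1_700:
        return "water stress"
    return "no stress"

# Hypothetical example: 50 billion m3/year of renewable water shared by 60 million people.
print(falkenmark_category(50e9, 60_000_000))  # about 833 m3/capita -> "water scarcity"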


If you have ever studied philosophers, you have surely been exposed to the teachings of Aristotle. A great thinker, Aristotle examines ideas such as eudaimonia (happiness), virtue, friendship, pleasure, and other character traits of human beings in his works. In his writings, Aristotle suggests that the goal of all human beings is to achieve happiness. Everything that we do, then, is for this purpose, even when our actions do not explicitly demonstrate this. For instance, Aristotle reasons that even when we seek out friendships, we are indirectly aspiring to be happy, for it is through our friendships, we believe, that we will find happiness. Aristotle asserts that there are three reasons why we choose to be friends with someone: because he is virtuous, because he has something to offer us, or because he is pleasant. When two people are equally virtuous, Aristotle classifies their friendship as perfect. When, however, there is a disparity between the two friends’ moral fiber, or when one friend is using the other for personal gain or pleasure alone, Aristotle claims that the friendship is imperfect. In a perfect friendship—in this example, let’s call one person friend A and the other friend B—friend A wishes friend B success for his own sake. Friend A and friend B spend time together, learn from each other, and make similar decisions. Aristotle claims, though, that a relationship of this type is merely a reflection of our relationship with ourselves. In other words, we want success for ourselves, we spend time alone with ourselves, and we make the same kinds of decisions over and over again. So, a question that Aristotle raises, then, is: Is friendship really another form of self-love?


The growth of cities, the construction of hundreds of new factories, and the spread of railroads in the United States before 1850 had increased the need for better illumination. But the lighting in American homes had improved very little over that of ancient times. Through the colonial period, homes were lit with tallow candles or with a lamp of the kind used in ancient Rome — a dish of fish oil or other animal or vegetable oil in which a twisted rag served as a wick. Some people used lard, but they had to heat charcoal underneath to keep it soft and burnable. The sperm whale provided a superior burning oil, but this was expensive. In 1830 a new substance called "camphene" was patented, and it proved to be an excellent illuminant. But while camphene gave a bright light, it too remained expensive, had an unpleasant odor, and was dangerously explosive. Between 1830 and 1850 it seemed that the only hope for cheaper illumination in the United States was in the wider use of gas. In the 1840's American gas manufacturers adopted improved British techniques for producing illuminating gas from coal. But the expense of piping gas to the consumer remained so high that until midcentury gaslighting was feasible only in urban areas, and only for public buildings or for the wealthy. In 1854 a Canadian doctor, Abraham Gesner, patented a process for distilling a pitchlike mineral found in New Brunswick and Nova Scotia that produced illuminating gas and an oil that he called "kerosene" (from "keros," the Greek word for wax, and "ene" because it resembled camphene). Kerosene, though cheaper than camphene, had an unpleasant odor, and Gesner never made his fortune from it. But Gesner had aroused a new hope for making an illuminating oil from a product coming out of North American mines.


The ice sheet that blanketed much of North America during the last glaciation was, in the areas of maximum accumulation, more than a mile thick. Everywhere the glacier lay, its work is evident today. Valleys were scooped out and rounded by the moving ice; peaks were scraped clean. Huge quantities of rock were torn from the northern lands and carried south. Long, high east-west ridges of this eroded debris were deposited by the ice at its melting southern margin. Furthermore, the weight of the huge mass of ice depressed the crust of the Earth in some parts of Canada by over a thousand feet. The crust is still rebounding from that depression. In North America, perhaps the most conspicuous features of the postglacial landscape are the Great Lakes on the border between the United States and Canada. No other large freshwater body lies at such favorable latitudes. The history of the making of these lakes is long and complex. As the continental ice sheet pushed down from its primary centers of accumulation in Canada, it moved forward in lobes of ice that followed the existing lowlands. Before the coming of the ice, the basins of the present Great Lakes were simply the lowest-lying regions of a gently undulating plain. The moving tongues of ice scoured and deepened these lowlands as the glacier made its way toward its eventual terminus near the present Ohio and Missouri rivers. About 16,000 years ago the ice sheet stood for a long time with its edge just to the south of the present Great Lakes. Erosional debris carried by the moving ice was dumped at the melting southern edge of the glacier and built up long ridges called terminal moraines. When the ice began to melt back from this position about 14,000 years ago, meltwater collected behind the dams formed by the moraines. The crust behind the moraines was still depressed from the weight of the ice it had borne, and this too helped create the Great Lakes. The first of these lakes drained southward across Illinois and Indiana, along the channels of the present Illinois and Wabash rivers.


The Hardy-Weinberg equilibrium is a principle stating that the genetic variation in a population will remain constant from one generation to the next in the absence of disturbing factors. The principle predicts that both genotype and allele frequencies will remain constant. The Hardy-Weinberg principle describes an idealized state of a population. For a population to be in this kind of state, there can’t be any gene mutations, migrations of individuals, genetic drift, or natural selection. Also, random mating must occur. When all these conditions are met, it’s said that the population is in equilibrium. But because all of these things commonly occur in nature, the Hardy-Weinberg equilibrium rarely applies in reality. However, the Hardy-Weinberg equations can still be used for any population, even if it is not in equilibrium. There are two equations: 𝑝 + 𝑞 = 1 and 𝑝² + 2𝑝𝑞 + 𝑞² = 1, where 𝑝 is the frequency of the dominant allele, 𝑞 is the frequency of the recessive allele, 𝑝² is the frequency of individuals with the homozygous dominant genotype, 2𝑝𝑞 is the frequency of individuals with the heterozygous genotype, and 𝑞² is the frequency of individuals with the homozygous recessive genotype. The first equation tells us that the sum of the frequencies of all alleles of one gene locus in one generation is 100%, while the second one tells us that the sum of the frequencies of all genotypes for one gene locus in one population is also 100%.
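As a worked example of the two equations, the short Python sketch below starts from an assumed dominant-allele frequency of 𝑝 = 0.6 (a made-up figure, not from the text) and derives the expected genotype frequencies; the printed sum confirms that 𝑝² + 2𝑝𝑞 + 𝑞² = 1.

def hardy_weinberg(p: float) -> dict:
    """Expected genotype frequencies under Hardy-Weinberg equilibrium."""
    q = 1 - p  # first equation: p + q = 1
    return {
        "homozygous dominant (p^2)": p ** 2,
        "heterozygous (2pq)": 2 * p * q,
        "homozygous recessive (q^2)": q ** 2,
    }

freqs = hardy_weinberg(0.6)  # assumed allele frequency, for illustration only
for genotype, frequency in freqs.items():
    print(f"{genotype}: {frequency:.2f}")
# Second equation: the genotype frequencies sum to 1 (up to floating-point rounding).
print("sum of genotype frequencies:", sum(freqs.values()))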


The penny press, which emerged in the United States during the 1830's, was a powerful agent of mass communication. These newspapers were little dailies, generally four pages in length, written for the mass taste. They differed from the staid, formal presentation of the conservative press, with its emphasis on political and literary topics. The new papers were brief and cheap, emphasizing sensational reports of police courts and juicy scandals as well as human interest stories. Twentieth-century journalism was already foreshadowed in the penny press of the 1830's. The New York Sun, founded in 1833, was the first successful penny paper, and it was followed two years later by the New York Herald, published by James Gordon Bennett. Not long after, Horace Greeley issued the New York Tribune, which was destined to become the most influential paper in America. Greeley gave space to the issues that deeply touched the American people before the Civil War — abolitionism, temperance, free homesteads, Utopian cooperative settlements, and the problems of labor. The weekly edition of the Tribune, with 100,000 subscribers, had a remarkable influence in rural areas, especially in Western communities. Americans were reputed to be the most avid readers of periodicals in the world. An English observer enviously calculated that, in 1829, the number of newspapers circulated in Great Britain was enough to reach only one out of every thirty-six inhabitants weekly; Pennsylvania in that same year had a newspaper circulation which reached one out of every four inhabitants weekly. Statistics seemed to justify the common belief that Americans were devoted to periodicals. Newspapers in the United States increased from 1,200 in 1833 to 3,000 by the early 1860's, on the eve of the Civil War. This far exceeded the number and circulation of newspapers in England and France.


Beads were probably the first durable ornaments humans possessed, and the intimate relationship they had with their owners is reflected in the fact that beads are among the most common items found in ancient archaeological sites. In the past, as today, men, women, and children adorned themselves with beads. In some cultures still, certain beads are often worn from birth until death, and then are buried with their owners for the afterlife. Abrasion due to daily wear alters the surface features of beads, and if they are buried for long, the effects of corrosion can further change their appearance. Thus, interest is imparted to the bead both by use and by the effects of time. Besides their wearability, either as jewelry or incorporated into articles of attire, beads possess the desirable characteristics of every collectible: they are durable, portable, available in infinite variety, and often valuable in their original cultural context as well as in today's market. Pleasing to look at and touch, beads come in shapes, colors, and materials that almost compel one to handle them and to sort them. Beads are miniature bundles of secrets waiting to be revealed: their history, manufacture, cultural context, economic role, and ornamental use are all points of information one hopes to unravel. Even the most mundane beads may have traveled great distances and been exposed to many human experiences. The bead researcher must gather information from many diverse fields. In addition to having to be a generalist while specializing in what may seem to be a narrow field, the researcher is faced with the problem of primary materials that have little or no documentation. Many ancient beads that are of ethnographic interest have often been separated from their original cultural context. The special attractions of beads contribute to the uniqueness of bead research. While often regarded as the “small change of civilizations,” beads are a part of every culture, and they can often be used to date archaeological sites and to designate the degree of mercantile, technological, and cultural sophistication.


The United States is the only industrialized nation in the world that does not provide healthcare to all of its citizens. Instead, healthcare for those under 65 is managed by a complex web of insurance companies, mostly representing for-profit businesses. This results in exorbitant healthcare premiums, leaving approximately 45 million citizens uninsured and unable to receive regular healthcare. And this is not limited to those who are unemployed. Many businesses can’t afford to provide their employees with health insurance, leaving not just the poor, but also the working middle class to fend for themselves. The best solution to this crisis is to move toward a single-payer system. Simply put, this would entail financing healthcare through a single source, most likely the federal government. Everyone would be covered under this system, regardless of age, preexisting conditions, or employment status. Although income and sales taxes would be progressively increased to fund universal healthcare, the benefits far outweigh the drawbacks. For instance, this public system would be less expensive to run than the current system. Administrative costs would be centralized and therefore greatly reduced. Money would no longer be spent frivolously as it is now in the for-profit sector. Currently, insurance companies spend millions on advertisements, market analysis, utilization review, patient tracking, and CEO salaries. All of that money could be used instead for what it should be: the provision of medical services. Canada, for instance, which acknowledges that healthcare is a right of every citizen and implements the single-payer system, spends only 8% on administration, whereas the United States spends approximately 24% for the same purpose. Also, the single-payer system puts healthcare back in the hands of the physicians. They will be able to make decisions based on what is best for their patients, not on what insurance companies deem allowable. Furthermore, universal healthcare will reduce the mortality rate of U.S. citizens by 25%. Studies suggest that in countries where healthcare is universal, citizens visit their primary care physicians more frequently, and as a result, stay healthier by taking preventative measures.


Some observers have attributed the dramatic growth in temporary employment that occurred in the United States during the 1980’s to increased participation in the workforce by certain groups, such as first-time or reentering workers, who supposedly prefer such arrangements. However, statistical analyses reveal that demographic changes in the workforce did not correlate with variations in the total number of temporary workers. Instead, these analyses suggest that factors affecting employers account for the rise in temporary employment. One factor is product demand: temporary employment is favored by employers who are adapting to fluctuating demand for products while at the same time seeking to reduce overall labor costs. Another factor is labor’s reduced bargaining strength, which allows employers more control over the terms of employment. Given the analyses, which reveal that growth in temporary employment now far exceeds the level explainable by recent workforce entry rates of groups said to prefer temporary jobs, firms should be discouraged from creating excessive numbers of temporary positions. Government policymakers should consider mandating benefit coverage for temporary employees, promoting pay equity between temporary and permanent workers, assisting labor unions in organizing temporary workers, and encouraging firms to assign temporary jobs primarily to employees who explicitly indicate that preference.


Throughout the centuries, various writers have contributed greatly to the literary treasure trove of books lining the shelves of today’s libraries. In addition to writing interesting material, many famous writers, such as Edgar Allan Poe, were larger-than-life characters with personal histories that are as interesting to read as the stories they wrote. Poe’s rocky life included expulsion from the United States Military Academy at West Point in 1831 and an ongoing battle with alcohol. Yet, despite heavy gambling debts, poor health, and chronic unemployment, Poe managed to produce a body of popular works, including “The Raven” and “The Murders in the Rue Morgue.” Herman Melville, author of Moby Dick, once lived among the cannibals in the Marquesas Islands and wrote exotic tales inspired by his years of service in the U.S. Navy. Dublin-born Oscar Wilde was noted for his charismatic personality, his outrageous lifestyle, and his witty catchphrases, such as “Nothing succeeds like excess.” D. H. Lawrence wrote scandalous novels that were often censored, and Anne Rice led a double life writing bestselling vampire novels under her real name and using the nom de plume “A. N. Roquelaure” for the lowbrow erotica novels she penned on the side. Nonconformist author and naturalist Henry David Thoreau once fled to the woods and generated enough interesting material to fill his noted book Walden. Thoreau wrote on the issue of passive resistance protest in his essay “Civil Disobedience” and served time in jail for withholding tax payments in protest of the United States government’s policy towards slavery. American short story writer O. Henry’s colorful life was marred by tragic events, such as being accused of and sentenced for allegedly stealing money from an Austin, Texas, bank. Despite his success selling his short stories, O. Henry struggled financially and was nearly bankrupt when he died. As diverse as these famous authors’ backgrounds were, they all led unconventional lives while writing great literary works that will endure throughout the ages. The next time you read an interesting book, consider learning more about the author by reading his or her biography so you can learn about the unique life experiences that shaped his or her writing.


Crows are probably the most frequently met and easily identifiable members of the native fauna of the United States. The great number of tales, legends, and myths about these birds indicates that people have been exceptionally interested in them for a long time. On the other hand, when it comes to substantive — particularly behavioral — information, crows are less well known than many comparably common species and, for that matter, not a few quite uncommon ones: the endangered California condor, to cite one obvious example. There are practical reasons for this. Crows are notoriously poor and aggravating subjects for field research. Keen observers and quick learners, they are astute about the intentions of other creatures, including researchers, and adept at avoiding them. Because they are so numerous, active, and monochromatic, it is difficult to distinguish one crow from another. Bands, radio transmitters, or other identifying devices can be attached to them, but this of course requires catching live crows, who are among the wariest and most untrappable of birds. Technical difficulties aside, crow research is daunting because the ways of these birds are so complex and various. As preeminent generalists, members of this species ingeniously exploit a great range of habitats and resources, and they can quickly adjust to changes in their circumstances. Being so educable, individual birds have markedly different interests and inclinations, strategies and scams. For example, one pet crow learned how to let a dog out of its kennel by pulling the pin on the door. When the dog escaped, the bird went into the kennel and ate its food.


The United States Constitution makes no provision for the nomination of candidates for the presidency. As the framers of the Constitution set up the system, the electors would, out of their own knowledge, select the "wisest and best" as President. But the rise of political parties altered that system drastically — and with the change came the need for nominations. The first method that the parties developed to nominate presidential candidates was the congressional caucus, a small group of members of Congress. That method was regularly used in the elections of 1800 to 1824. But its closed character led to its downfall in the mid-1820's. For the election of 1832, both major parties turned to the national convention as their nominating device. It has continued to serve them ever since. With the convention process, the final selection of the President is, for all practical purposes, narrowed to one of two persons: the Republican or the Democratic party nominee. Yet there is almost no legal control of that vital process. The Constitution is silent on the subject of presidential nominations. There is, as well, almost no statutory law on the matter. The only provisions in federal law have to do with the financing of conventions. And in each state there is only a small body of laws that deal with issues related to the convention, such as the choosing of delegates and the manner in which they may cast their votes. In short, the convention is very largely a creation and a responsibility of the political parties themselves. In both the Republican and Democratic parties, the national committee is charged with making the plans and arrangements for the national convention. As much as a year before it is held, the committee meets (usually in Washington, D.C.) to set the time and place for the convention. July has been the favored month; but each party has met in convention as early as mid-June and also as late as the latter part of August. Where the convention is held is a matter of prime importance. There must be an adequate convention hall, sufficient hotel accommodations, plentiful entertainment outlets, and efficient transportation facilities.


Each advance in microscopic technique has provided scientists with new perspectives on the function of living organisms and the nature of matter itself. The invention of the visible-light microscope late in the sixteenth century introduced a previously unknown realm of single-celled plants and animals. In the twentieth century, electron microscopes have provided direct views of viruses and minuscule surface structures. Now another type of microscope, one that utilizes X rays rather than light or electrons, offers a different way of examining tiny details; it should extend human perception still farther into the natural world. The dream of building an X-ray microscope dates to 1895; its development, however, was virtually halted in the 1940's because the development of the electron microscope was progressing rapidly. During the 1940's electron microscopes routinely achieved resolution better than that possible with a visible-light microscope, while the performance of X-ray microscopes resisted improvement. In recent years, however, interest in X-ray microscopes has revived, largely because of advances such as the development of new sources of X-ray illumination. As a result, the brightness available today is millions of times that of X-ray tubes, which, for most of the century, were the only available sources of soft X rays. The new X-ray microscopes considerably improve on the resolution provided by optical microscopes. They can also be used to map the distribution of certain chemical elements. Some can form pictures in extremely short times; others hold the promise of special capabilities such as three-dimensional imaging. Unlike conventional electron microscopy, X-ray microscopy enables specimens to be kept in air and in water, which means that biological samples can be studied under conditions similar to their natural state. The illumination used, so-called soft X rays in the wavelength range of twenty to forty angstroms (an angstrom is one ten-billionth of a meter), is also sufficiently penetrating to image intact biological cells in many cases. Because of the wavelength of the X rays used, soft X-ray microscopes will never match the highest resolution possible with electron microscopes. Rather, their special properties will make possible investigations that will complement those performed with light- and electron-based instruments.


Galaxies are the major building blocks of the universe. A galaxy is a giant family of many millions of stars, and it is held together by its own gravitational field. Most of the material universe is organized into galaxies of stars, together with gas and dust. There are three main types of galaxy: spiral, elliptical, and irregular. The Milky Way is a spiral galaxy: a flattish disc of stars with two spiral arms emerging from its central nucleus. About one-quarter of all galaxies have this shape. Spiral galaxies are well supplied with the interstellar gas in which new stars form; as the rotating spiral pattern sweeps around the galaxy it compresses gas and dust, triggering the formation of bright young stars in its arms. The elliptical galaxies have a symmetrical elliptical or spheroidal shape with no obvious structure. Most of their member stars are very old and, since ellipticals are devoid of interstellar gas, no new stars are forming in them. The biggest and brightest galaxies in the universe are ellipticals with masses of about 10¹³ times that of the Sun; these giants may frequently be sources of strong radio emission, in which case they are called radio galaxies. About two-thirds of all galaxies are elliptical. Irregular galaxies comprise about one-tenth of all galaxies and they come in many subclasses. Measurement in space is quite different from measurement on Earth. Some terrestrial distances can be expressed as intervals of time: the time to fly from one continent to another or the time it takes to drive to work, for example. By comparison with these familiar yardsticks, the distances to the galaxies are incomprehensibly large, but they too are made more manageable by using a time calibration, in this case, the distance that light travels in one year. On such a scale the nearest giant spiral galaxy, the Andromeda galaxy, is two million light years away. The most distant luminous objects seen by telescopes are probably ten thousand million light years away. Their light was already halfway here before the Earth even formed. The light from the nearby Virgo galaxy set out when reptiles still dominated the animal world.
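The claim that the light from the most distant objects was already halfway here before the Earth formed can be checked with round numbers; the short sketch below assumes an Earth age of about 4.6 billion years, a figure not given in the passage.

distance_ly = 10e9    # light years to the most distant luminous objects (from the passage)
earth_age_yr = 4.6e9  # approximate age of the Earth in years (assumed figure)

travel_time_yr = distance_ly         # light covers one light year per year
halfway_yr_ago = travel_time_yr / 2  # the light was halfway here 5 billion years ago
print(halfway_yr_ago > earth_age_yr)  # True: the halfway point predates the Earth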


Under certain circumstances, the human body must cope with gases at greater-than-normal atmospheric pressure. For example, gas pressures increase rapidly during a dive made with scuba gear because the breathing equipment allows divers to stay underwater longer and dive deeper. The pressure exerted on the human body increases by 1 atmosphere for every 10 meters of depth in seawater, so that at 30 meters in seawater a diver is exposed to a pressure of about 4 atmospheres. The pressure of the gases being breathed must equal the external pressure applied to the body; otherwise breathing is very difficult. Therefore all of the gases in the air breathed by a scuba diver at 40 meters are present at five times their usual pressure. Nitrogen, which composes 80 percent of the air we breathe, usually causes a balmy feeling of well-being at this pressure. At a pressure of 5 atmospheres, nitrogen causes symptoms resembling alcohol intoxication, known as nitrogen narcosis. Nitrogen narcosis apparently results from a direct effect on the brain of the large amounts of nitrogen dissolved in the blood. Deep dives are less dangerous if helium is substituted for nitrogen, because under these pressures helium does not exert a similar narcotic effect. As a scuba diver descends, the pressure of nitrogen in the lungs increases. Nitrogen then diffuses from the lungs to the blood, and from the blood to body tissues. The reverse occurs when the diver surfaces; the nitrogen pressure in the lungs falls and the nitrogen diffuses from the tissues into the blood, and from the blood into the lungs. If the return to the surface is too rapid, nitrogen in the tissues and blood cannot diffuse out rapidly enough and nitrogen bubbles are formed. They can cause severe pains, particularly around the joints. Another complication may result if the breath is held during ascent. During ascent from a depth of 10 meters, the volume of air in the lungs will double because the air pressure at the surface is only half of what it was at 10 meters. This change in volume may cause the lungs to distend and even rupture. This condition is called air embolism. To avoid this event, a diver must ascend slowly, never at a rate exceeding the rise of the exhaled air bubbles, and must exhale during ascent.
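The pressure and lung-volume arithmetic in the passage follows two simple rules: total pressure is 1 atmosphere at the surface plus 1 atmosphere per 10 meters of seawater, and, by Boyle's law, the volume of a held breath scales inversely with pressure. A minimal sketch of both rules follows; the 6-liter lung volume is an assumed figure for illustration only.

def pressure_at_depth(depth_m: float) -> float:
    """Total pressure in atmospheres at a given depth in seawater."""
    return 1 + depth_m / 10  # 1 atm at the surface plus 1 atm per 10 m

def volume_on_ascent(volume_l: float, start_m: float, end_m: float) -> float:
    """Boyle's law (p1 * v1 = p2 * v2) applied to air held in the lungs."""
    return volume_l * pressure_at_depth(start_m) / pressure_at_depth(end_m)

print(pressure_at_depth(30))       # 4.0 atm, as stated for 30 meters
print(pressure_at_depth(40))       # 5.0 atm, as stated for 40 meters
print(volume_on_ascent(6, 10, 0))  # 12.0 liters: air volume doubles from 10 m to the surface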


In the two decades between 1929 and 1949, sculpture in the United States sustained what was probably the greatest expansion in sheer technique to occur in many centuries. There was, first of all, the incorporation of welding into sculptural practice, with the result that it was possible to form a new kind of metal object. For sculptors working with metal, earlier restricted to the dense solidity of the bronze cast, it was possible to add a type of work assembled from paper-thin metal sheets or sinuously curved rods. Sculpture could take the form of a linear, two-dimensional frame and still remain physically self-supporting. Along with the innovation of welding came a correlative departure: freestanding sculpture that was shockingly flat. Yet another technical expansion of the options for sculpture appeared in the guise of motion. The individual parts of a sculpture were no longer understood as necessarily fixed in relation to one another, but could be made to change position within a work constructed as a moving object. Motorizing the sculpture was only one of many possibilities taken up in the 1930's. Other strategies for getting the work to move involved structuring it in such a way that external forces, like air movements or the touch of a viewer, could initiate motion. Movement brought with it a new attitude toward the issue of sculptural unity: a work might be made of widely diverse and even discordant elements; their formal unity would be achieved through the arc of a particular motion completing itself through time. Like the use of welding and movement, the third of these major technical expansions to develop in the 1930's and 1940's addressed the issues of sculptural materials and sculptural unity. But its medium for doing so was the found object, an item not intended for use in a piece of artwork, such as a newspaper or metal pipe. To create a sculpture by assembling parts that had been fabricated originally for a quite different context did not necessarily involve a new technology. But it did mean a change in sculptural practice, for it raised the possibility that making sculpture might involve more a conceptual shift than a physical transformation of the material from which it is composed.


Barbed wire, first patented in the United States in 1867, played an important part in the development of American farming, as it enabled the settlers to make effective fencing to enclose their land and keep cattle away from their crops. This had a considerable effect on cattle ranching, since the herds no longer had unrestricted use of the plains for grazing, and the fencing led to conflict between the farmers and the cattle ranchers. Before barbed wire came into general use, fencing was often made from serrated wire, which was unsatisfactory because it broke easily when under strain, and could snap in cold weather due to contraction. The first practical machine for producing barbed wire was invented in 1874 by an Illinois farmer, and between then and the end of the century about 400 types of barbed wire were devised, of which only about a dozen were ever put to practical use. Modern barbed wire is made from mild steel, high-tensile steel, or aluminum. Mild steel and aluminum barbed wire have two strands twisted together to form a cable that is stronger than single-strand wire and less affected by temperature changes. Single strand wire, round or oval, is made from high-tensile steel with the barbs crimped or welded on. The steel wires used are galvanized – coated with zinc to make them rustproof. The two wires that make up the line wire or cable are fed separately into a machine at one end. They leave it at the other end twisted together and barbed. The wire to make the barbs is fed into the machine from the sides and cut to length by knives that cut diagonally through the wire to produce a sharp point. This process continues automatically, and the finished barbed wire is wound onto reels, usually made of wire, in lengths of 400 meters or in weights of up to 50 kilograms. A variation of barbed wire is also used for military purposes. It is formed into long coils or entanglements called concertina wire.


Archaeology has long been an accepted tool for studying prehistoric cultures. Relatively recently the same techniques have been systematically applied to studies of the more immediate past. This has been called “historical archaeology,” a term that is used in the United States to refer to any archaeological investigation into North American sites that postdate the arrival of Europeans. Back in the 1930's and 1940's, when building restoration was popular, historical archaeology was primarily a tool of architectural reconstruction. The role of the archaeologist was to find the foundations of historic buildings and then take a back seat to architects. The mania for reconstruction had largely subsided by the 1950's and 1960's. Most people entering historical archaeology during this period came out of university anthropology departments, where they had studied prehistoric cultures. They were, by training, social scientists, not historians, and their work tended to reflect this bias. The questions they framed and the techniques they used were designed to help them understand, as scientists, how people behaved. But because they were treading on historical ground for which there was often extensive written documentation, and because their own knowledge of these periods was usually limited, their contributions to American history remained circumscribed. Their reports, highly technical and sometimes poorly written, went unread. More recently, professional archaeologists have taken over. These researchers have sought to demonstrate that their work can be a valuable tool not only of science but also of history, providing fresh insights into the daily lives of ordinary people whose existences might not otherwise be so well documented. This newer emphasis on archaeology as social history has shown great promise, and indeed work done in this area has led to a reinterpretation of the United States' past. In Kingston, New York, for example, evidence has been uncovered that indicates that English goods were being smuggled into that city at a time when the Dutch supposedly controlled trading in the area. And in Sacramento an excavation at the site of a fashionable nineteenth-century hotel revealed that garbage had been stashed in the building's basement despite sanitation laws to the contrary.


Even before the turn of the century, movies began to develop in two major directions: the realistic and the formalistic. Realism and formalism are merely general, rather than absolute, terms. When used to suggest a tendency toward either polarity, such labels can be helpful, but in the end they are still just labels. Few films are exclusively formalist in style, and fewer yet are completely realist. There is also an important difference between realism and reality, although this distinction is often forgotten. Realism is a particular style, whereas physical reality is the source of all the raw materials of film, both realistic and formalistic. Virtually all movie directors go to the photographable world for their subject matter, but what they do with this material — how they shape and manipulate it —determines their stylistic emphasis. Generally speaking, realistic films attempt to reproduce the surface of concrete reality with a minimum of distortion. In photographing objects and events, the filmmaker tries to suggest the copiousness of life itself. Both realist and formalist film directors must select (and hence emphasize) certain details from the chaotic sprawl of reality. But the element of selectivity in realistic films is less obvious. Realists, in short, try to preserve the illusion that their film world is unmanipulated, an objective mirror of the actual world. Formalists, on the other hand, make no such pretense. They deliberately stylize and distort their raw materials so that only the very naive would mistake a manipulated image of an object or event for the real thing. We rarely notice the style in a realistic movie; the artist tends to be self-effacing. Some filmmakers are more concerned with what is being shown than how it is manipulated. The camera is used conservatively. It is essentially a recording mechanism that reproduces the surface of tangible objects with as little commentary as possible. A high premium is placed on simplicity, spontaneity, and directness. This is not to suggest that these movies lack artistry, however, for at its best the realistic cinema specializes in art that conceals art.


The word laser was coined as an acronym for Light Amplification by the Stimulated Emission of Radiation. Ordinary light, from the Sun or a light bulb, is emitted spontaneously, when atoms or molecules get rid of excess energy by themselves, without any outside intervention. Stimulated emission is different because it occurs when an atom or molecule holding onto excess energy has been stimulated to emit it as light. Albert Einstein was the first to suggest the existence of stimulated emission in a paper published in 1917. However, for many years physicists thought that atoms and molecules always were much more likely to emit light spontaneously and that stimulated emission thus always would be much weaker. It was not until after the Second World War that physicists began trying to make stimulated emission dominate. They sought ways by which one atom or molecule could stimulate many others to emit light, amplifying it to much higher powers. The first to succeed was Charles H. Townes, then at Columbia University in New York. Instead of working with light, however, he worked with microwaves, which have a much longer wavelength, and built a device he called a “maser,” for Microwave Amplification by the Stimulated Emission of Radiation. Although he thought of the key idea in 1951, the first maser was not completed until a couple of years later. Before long, many other physicists were building masers and trying to discover how to produce stimulated emission at even shorter wavelengths. The key concepts emerged about 1957. Townes and Arthur Schawlow, then at Bell Telephone Laboratories, wrote a long paper outlining the conditions needed to amplify stimulated emission of visible light waves. At about the same time, similar ideas crystallized in the mind of Gordon Gould, then a 37-year-old graduate student at Columbia, who wrote them down in a series of notebooks. Townes and Schawlow published their ideas in a scientific journal, Physical Review Letters, but Gould filed a patent application. Three decades later, people still argue about who deserves the credit for the concept of the laser.


A hoax of some note was apparently perpetrated on Appleton's Cyclopedia of American Biography, an important American biographical dictionary that was published in 1889.  This extensive and well-regarded reference was published with a number of biographies of scientists who most likely never existed or who never actually undertook the research cited in the biographical dictionary. It was not until some 30 years after Appleton's Cyclopedia was first published that word of the fake biographies began cropping up.  It was noted in a 1919 article in the Journal of the New York Botanical Garden that at least 14 of the biographies of botanists were fake.  Then, in 1937, an article in the American Historical Review declared that at least 18 more biographies were false. The source of the false biographies is not known to this day, but a look at a number of steps in the process by which articles were submitted to the biographical encyclopedia sheds some light on how such a hoax could have occurred.  First, contributors were paid by the number and length of articles submitted, and the contributors themselves, as experts in their respective fields, were invited to suggest new names for inclusion. Then, the false biographies were created in such a way as to make verification of facts by the publisher extremely difficult in an era without the instantaneous communication of today: the false biographies were all about people who supposedly had degrees from foreign institutions and who had published their research findings in non-English language publications outside of the United States. Finally, the reference itself provides a long list of contributors but does not list which articles each of the contributors submitted, and, because the hoax was not discovered until well after the reference was first published, the publishing company no longer had records of who had submitted the false information. Unfortunately, the false information about historical research did not disappear with the final publication of the book.  Though it is now out of print, many libraries have copies of this comprehensive and, for the most part, highly useful reference.  Even more significant is the fact that a number of false citations from Appleton's Cyclopedia have cropped up in other reference sources and have now become part of the established chronicle of scientific and historical research.


The question of lunar formation has long puzzled astronomers. It was once theorized that the moon formed alongside the earth as material in a swirling disk coalesced to form both bodies. However, if both bodies formed simultaneously out of the same substance, we would expect the mean densities to be more or less identical. In fact, this is not the case at all. One of the most curious characteristics of the moon is that it is far less dense than the earth. Compared to the earth's mean density, which is 5.5 times that of water, the density of the moon is a mere 3.3 times that of water. Most of the earth's mass is located in its dense iron core, while the mantle and crust are composed primarily of lighter silicates. The moon, on the other hand, is composed entirely of lighter substances.

An alternate explanation, the "capture theory," suggested that the moon formed far away and was later captured by the earth. The moon was once wandering in space, like an asteroid, unattached to a planet. The rogue satellite veered too close to the earth and has since been tethered by the earth's gravitational field. However, comparison of lunar and terrestrial isotopes has undermined this theory. Isotopes are atomic indicators that leave a sort of geological fingerprint. The isotopes from lunar rock samples indicate that both earth and moon came from the same source.

A more recent theory, the "impact theory" of lunar formation, postulates that a large planet-like object, perhaps twice the mass of Mars, struck the earth at a shallow angle. The object disintegrated a portion of the earth's crust and mantle, sending a cloud of silicate vapor into orbit around the earth. In time, most of the material fell back to earth, while the rest coalesced into our moon.

Computer simulations (1997) by Robin Canup and Glen Stewart of the University of Colorado and by Shigeru Ida of the Tokyo Institute of Technology demonstrated that such a scenario is at least theoretically possible. While the impact theory is attractive in that it explains both why the moon is less dense than the earth and how both bodies could have originated from the same source, it is not without problems. Impact from a Mars-sized body would produce an earth-moon system with twice as much angular momentum as that which is actually observed. Therefore, although we are closer to resolving the question of lunar formation, the origin of the moon is still shrouded in mystery.


Eugene O'Neill is regarded as one of the best, if not the best, of America's dramatists. He was noted and well regarded during his life; however, it was after his demise that his works took their position of preeminence in the theater.

O'Neill experimented with various theatrical techniques before settling on the realistic style for which he received the most recognition. In his earlier works, he incorporated such devices as transporting Greek myths into modern settings, having characters wear masks to show their feelings and emotions, and allowing the audience to hear characters' inner voices in addition to their actual conversations.

But it is generally for the realistic and often semi-autobiographical plays written later in his life that O'Neill is most renowned. The Iceman Cometh depicts the Greenwich Village milieu of friends and acquaintances, with their unrealistic and unrealized aspirations. A Long Day's Journey into Night is a clearly autobiographical depiction of O'Neill's own family life. A Moon for the Misbegotten presents the final days of O'Neill's alcoholic brother.

Eugene O'Neill did not go unrecognized during his lifetime; the genius of his work was, in fact, well recognized. In 1936, he was awarded the Nobel Prize for Literature, and over the years he received Pulitzer Prizes for four of his plays.

However, he did die largely forgotten. Much of his best work, initially not considered commercially viable, achieved considerable prominence after his death. The Iceman Cometh, which was written in 1939, was revived in 1956, three years after O'Neill's passing. The unanticipated success of The Iceman Cometh led to the premiere of A Long Day's Journey into Night. Considered by many to be O'Neill's masterpiece, A Long Day's Journey into Night had actually been finished by the playwright in 1941, 12 years before his death, but it never graced the stage during his lifetime.


In young language learners, there is a critical period of time beyond which it becomes increasingly difficult to acquire a language. Children generally attain proficiency in their first language by the age of five and continue in a state of relative linguistic plasticity until puberty. Neurolinguistic research has singled out the lateralization of the brain as the reason for this dramatic change from fluidity to rigidity in language function. Lateralization is the process by which the brain hemispheres become dominant for different tasks. The right hemisphere of the brain controls emotions and social functions, whereas the left hemisphere regulates the control of analytical functions, intelligence, and logic. For the majority of adults, language functions are dominant on the left side of the brain. Numerous studies have demonstrated that it is nearly impossible to attain a nativelike accent in a second language after lateralization is complete, though some adults have overcome the odds.

Cognitive development also affects language acquisition, but in this case adult learners may have some advantages over child learners. Small children tend to have a very concrete, here-and-now view of the world around them, but at puberty, about the time that lateralization is complete, people become capable of abstract thinking, which is particularly useful for language learning. Abstract thinking enables learners to use language to talk about language. Generally speaking, adults can profit from grammatical explanations, whereas children cannot. This is evidenced by the fact that children are rather unreceptive to correction of grammatical features and instead tend to focus on the meaning of an utterance rather than its form. However, language learning theory suggests that for both adults and children, optimal language acquisition occurs in a meaning-centered context. Though children have the edge over adult language learners with respect to attaining a nativelike pronunciation, adults clearly have an intellectual advantage which greatly facilitates language learning.


Panel painting, common in thirteenth- and fourteenth-century Europe, involved a painstaking, laborious process. Wooden planks were joined, covered with gesso to prepare the surface for painting, and then polished smooth with special tools. On this perfect surface, the artist would sketch a composition with chalk, refine it with inks, and then begin the deliberate process of applying thin layers of egg tempera paint (egg yolk in which pigments are suspended) with small brushes. The successive layering of these meticulously applied paints produced the final, translucent colors. Backgrounds of gold were made by carefully applying sheets of gold leaf, and then embellishing or decorating the gold leaf by punching it with a metal rod on which a pattern had been embossed. Every step in the process was slow and deliberate. The quick-drying tempera demanded that the artist know exactly where each stroke be placed before the brush met the panel, and it required the use of fine brushes. It was, therefore, an ideal technique for emphasizing the hard linear edges and pure, fine areas of color that were so much a part of the overall aesthetic of the time. The notion that an artist could or would dash off an idea in a fit of spontaneous inspiration was completely alien to these deliberately produced works. Furthermore, making these paintings was so time-consuming that it demanded assistance. All such work was done by collective enterprise in the workshops. The painter or master who is credited with having created the painting may have designed the work and overseen its production, but it is highly unlikely that the artist's hand applied every stroke of the brush. More likely, numerous assistants, who had been trained to imitate the artist's style, applied the paint. The carpenter's shop probably provided the frame and perhaps supplied the panel, and yet another shop supplied the gold. Thus, not only many hands, but also many shops were involved in the final product. In spite of problems with their condition, restoration, and preservation, many panel paintings have survived, and today many of them are housed in museum collections.


Hotels were among the earliest facilities that bound the United States together. They were both creatures and creators of communities, as well as symptoms of the frenetic quest for community. Even in the first part of the nineteenth century, Americans were already forming the habit of gathering from all corners of the nation for both public and private, business and pleasure, purposes. Conventions were the new occasions, and hotels were distinctively American facilities making conventions possible. The first national convention of a major party to choose a candidate for President (that of the National Republican party, which met on December 12, 1831, and nominated Henry Clay for President) was held in Baltimore, at a hotel that was then reputed to be the best in the country. The presence in Baltimore of Barnum's City Hotel, a six-story building with two hundred apartments, helps explain why many other early national political conventions were held there. In the longer run, American hotels made other national conventions not only possible but pleasant and convivial. The growing custom of regularly assembling from afar the representatives of all kinds of groups – not only for political conventions, but also for commercial, professional, learned, and avocational ones – in turn supported the multiplying hotels. By the mid-twentieth century, conventions accounted for over a third of the yearly room occupancy of all hotels in the nation; about eighteen thousand different conventions were held annually with a total attendance of about ten million persons. Nineteenth-century American hotelkeepers, who were no longer the genial, deferential "hosts" of the eighteenth-century European inn, became leading citizens. Holding a large stake in the community, they exercised power to make it prosper. As owners or managers of the local “palace of the public,” they were makers and shapers of a principal community attraction. Travelers from abroad were mildly shocked by this high social position.


In the world of birds, bill design is a prime example of evolutionary fine-tuning. Shorebirds such as oystercatchers use their bills to pry open the tightly sealed shells of their prey; hummingbirds have stiletto-like bills to probe the deepest nectar-bearing flowers; and kiwis smell out earthworms thanks to nostrils located at the tip of their beaks. But few birds are more intimately tied to their source of sustenance than are crossbills. Two species of these finches, named for the way the upper and lower parts of their bills cross, rather than meet in the middle, reside in the evergreen forests of North America and feed on the seeds held within the cones of coniferous trees. The efficiency of the bill is evident when a crossbill locates a cone. Using a lateral motion of its lower mandible, the bird separates two overlapping scales on the cone and exposes the seed. The crossed mandibles enable the bird to exert a powerful biting force at the bill tips, which is critical for maneuvering them between the scales and spreading the scales apart. Next, the crossbill snakes its long tongue into the gap and draws out the seed. Using the combined action of the bill and tongue, the bird cracks open and discards the woody seed covering and swallows the nutritious inner kernel. This whole process takes but a few seconds and is repeated hundreds of times a day. The bills of different crossbill species and subspecies vary – some are stout and deep, others more slender and shallow. As a rule, large-billed crossbills are better at securing seeds from large cones, while small-billed crossbills are more deft at removing the seeds from small, thin-scaled cones. Moreover, the degree to which cones are naturally slightly open or tightly closed helps determine which bill design is the best. One anomaly is the subspecies of red crossbill known as the Newfoundland crossbill. This bird has a large, robust bill, yet most of Newfoundland's conifers have small cones, the same kind of cones that the slender-billed white-wings rely on.


Another type of lizard, Jackson's chameleon is a remarkable model of adaptability, one whose ability to adjust to varying environments exceeds that of other chameleons.  True to the reputation of chameleons generally, Jackson's chameleon is a master of camouflage.  Special skin cells called chromatophores enable the chameleon to change the pigment in its skin rapidly and escape detection.  While the lizard is stalking its prey, it moves very slowly, in a deliberate rocking gait, so as to appear to be part of a branch moved by a gentle breeze.  Jackson's chameleon also has the ability to change the shape of its body.  By elongating itself, it can look like a twig; by squeezing its sides laterally, it can appear flattened like a leaf.  These camouflaging techniques also help the chameleon escape detection by predators.

The color change that is characteristic of all chameleons is not solely for the purpose of camouflage.  Jackson's chameleon, like all lizards, is an ectotherm that depends on the sun to maintain its body temperature.  By changing to a darker color in the morning hours, it can absorb more heat.  Once it has reached its optimal body temperature of 77 degrees Fahrenheit (25 degrees Celsius), it changes to a paler hue.  Through color change, the chameleon can also communicate its mood to other members of the species.

Jackson's chameleon has further exemplified its adaptive nature, in a way that surpasses other chameleons, through its noteworthy migration to a new home: it has become a well-established resident of the Hawaiian Islands even though it is indigenous to the highland rain forests of Kenya and Tanzania.  As the story goes, back in 1972 a pet shop owner on the island of Oahu imported several dozen Jackson's chameleons to be sold as pets.  When the shipment arrived, the reptiles were emaciated and dehydrated, so the pet shop owner released the lizards into his lush garden, assuming that he could recapture them after they had revived.  The chameleons escaped and spread throughout the island, where they thrived in the moist, well-planted tropical flora.  Relishing the habitat of secondary growth forest, agricultural areas, and even residential gardens, the chameleon found a ready-made home in its adopted environment.

Jackson's chameleons continued their unsolicited migration to other islands in the chain as the popular lizards were captured by hikers and other visitors to the island, who took them home and released them in their gardens.  The lizard is now truly ubiquitous; that is, it is commonplace on all the major Hawaiian Islands.






Caleb Bradham, called "Doc" Bradham by friends and acquaintances, was the owner of a pharmacy at the end of the nineteenth century.  In his pharmacy, Doc Bradham had a soda fountain, as was customary in pharmacies of the time.  He took great pleasure in creating new and unusual mixtures of drinks for customers at the fountain.

Like many other entrepreneurs of the era, Doc Bradham wanted to create a cola drink to rival Coca-Cola.  By 1895, Coca-Cola was a commercial success throughout the United States, and numerous innovators were trying to come up with their own products to cash in on the success that Coca-Cola was experiencing.  In his pharmacy, Doc Bradham developed his own version of a cola drink, and Doc's drink became quite popular at his soda fountain.  The drink he created was made with a syrup consisting of sugar, essence of vanilla, cola nuts, and other flavorings.  The syrup was mixed at the soda fountain with carbonated water before it was served.

The drink that Doc Bradham created was originally called "Brad's drink" by those in his hometown of New Bern who visited the soda fountain and sampled his product.  Those who tasted the drink claimed not only that it had a refreshing and invigorating quality but also that it had a medicinal value by providing relief from dyspepsia, or upset stomach.  From this reputed ability to relieve dyspepsia, Doc Bradham created the name of Pepsi-Cola for his drink.  Doc Bradham eventually made the decision to mass market his product, and in 1902 he founded the Pepsi-Cola Company.  The advertising for this new product, of course, touted the drink as an "invigorating drink" that "aids digestion."



Susan B. Anthony, teacher by trade and lifelong champion of women's rights, participated throughout her adulthood in the effort to improve the status of women in the United States and to gain rights for them.  She was a stalwart champion of economic independence for women, recognizing the importance of financial independence to the emancipation of women.

She worked tirelessly from 1854 to 1860 in the fight to obtain rights for married women in the state of New York.  She spoke out on the issues, organized volunteers, and circulated petitions asking that married women be granted the rights to own property, to earn wages, to have custody of their children in the event of divorce, and to vote.  In 1860, the landmark Married Women's Property Act, which conferred all of the above rights except the right to vote, was passed in the state of New York.  It became law in large part due to the efforts of Anthony.

In the years following the Civil War, Anthony dedicated her life to obtaining the right to vote for women.  She believed that the right to vote should be a federal right and should not be left up to the discretionary authority of each individual state.  Throughout the rest of her career, she worked on the national level toward the accomplishment of this goal.

Unfortunately, Anthony did not live to see her dream accomplished.  In 1900, at the age of 80, she retired from the presidency of the National American Woman Suffrage Association without having fulfilled her goal of achieving the vote for women but with the undertaking well underway.

When the vote was finally granted to women in the United States in 1920, Anthony was given credit as a primary contributor in the accomplishment of this monumental change in social structure.  In recognition of Anthony's accomplishments, the United States government issued a one-dollar coin in her honor in 1979.  The front side of the coin contained a side view of Anthony's face and was inscribed across the top with the word "liberty."


In 1796, George Washington, the first president of the United States, resigned after completing two four-year terms in office.  He had remained in the service of his country until he was assured that it could continue and succeed without his leadership.  John Adams took over Washington's position as president in a smooth and bloodless change of power that was unusual for its time.

By the end of Washington's presidency, the American government had been established.  The three branches of government had been set up and were in working order.  The debt had been assumed, and funds had been collected;  treaties with major European powers had been signed, and challenges to the new government authorities had been firmly met.  However, when Washington left office, there were still some unresolved problems.  Internationally, France was in turmoil and on the brink of war; domestically, the contest for political control was a major concern.  In addition, there was still some resistance to governmental policies.

It was within this context that Washington made his farewell address to the nation.  In the address published in a Philadelphia newspaper, Washington advised his fellow politicians to base their views and decisions on the bedrock of enduring principles.  He further recommended a firm adherence to the Constitution because he felt that this was necessary for the survival of the young country.  He asked that credit be used sparingly and expressed concerns about the unity, the independence, and the future of the young country.  In regard to relations with foreign powers, he encouraged the country not to be divided by the conflicts in Europe.  Stating that foreign influences were the foe of the republican government, he maintained that relations were to be strictly commercial and not political.  He pleaded with the American public to guard their freedoms jealously.  Finally, he reminded all citizens of the need for religion and morality and stated his belief that one cannot have one without the other.








Polymorphs are minerals with a common composition but distinct internal structures.  Polymorphs exist because of the widely varying physical conditions under which minerals are formed.  Minerals can be formed in the fierce heat well below the earth's surface or in cold, damp domains much closer to the surface.  Most of the elements that make up minerals are widely distributed throughout the planet's crust.  Compounds with like chemical compositions can be created in different physical settings, resulting in compounds with two or more strongly differentiated internal structures, each of which is stable in a different physical setting.  These related minerals with unlike crystal structures are known as polymorphs, which means "several forms."

A commonly cited illustration is carbon, which has four known polymorphs.  Two of the polymorphs of carbon, chaoite and lonsdaleite, are quite rare and have only been found in meteorites.  The most widespread of carbon's four polymorphs is graphite, which forms loosely bonded crystals at relatively low temperatures and pressures.  Much of the earth's crust is conducive to the formation of graphite, making graphite the most pervasive of carbon's polymorphs.  Diamond is another polymorph of carbon, one that requires the high temperatures and pressures deep within the earth's crust to form.  Diamond forms at depths greater than 150 kilometers below the earth's surface, at temperatures higher than 1,000 degrees centigrade, and at pressures greater than 50,000 times the pressure on the surface of the earth.  Diamond is brought closer to the surface of the earth when gas-rich magmas from deep in the earth's mantle erupt through cracks known as diatremes, or diamond pipes.  Because of its formation so deep in the earth, diamond forms extremely hard crystals and is the most compact and strongly bonded of the four polymorphs of carbon.
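A rough plausibility check on those pressure figures (a back-of-the-envelope estimate of my own, assuming a typical rock density of about 3,300 kg/m³ rather than a value given in the passage): the lithostatic pressure at depth $h$ is approximately

$$P = \rho g h \approx 3300\,\mathrm{kg/m^3} \times 9.8\,\mathrm{m/s^2} \times 1.5\times10^{5}\,\mathrm{m} \approx 4.9\,\mathrm{GPa},$$

and since one atmosphere is about $10^{5}$ Pa, this works out to roughly 48,000 atmospheres at 150 kilometers – consistent with the "50,000 times surface pressure" threshold quoted above.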


The era of modern sports began with the first Olympic games in 1896, and since the dawn of this new era, women have made great strides in the arena of running.  In the early years, female runners faced numerous restrictions in the world of competitive running.  Even though women were banned from competing in the 1896 Olympics, one Greek woman ran unofficially in the men's marathon.  She had to stop outside the Olympic stadium, finishing with a time of 4 hours and 30 minutes.  Four years later, women were still prohibited from Olympic competition because, according to the International Olympic Committee, it was not appropriate for women to compete in any event that caused them to sweat.  In the 1928 Olympics, women were finally granted permission to compete in running events.  However, because some of the participants collapsed at the finish of the 800-meter race, it was decided to limit women runners to races of 200 meters or less in the Olympics four years later.  The women's 800-meter race was not reintroduced to the Olympic games until 1960.  Over a decade later, in 1972, the 1500-meter race was added.  It was not until 1984 that the women's marathon was made an Olympic event.

Before 1984, women had been competing in long-distance races outside of the Olympics.  In 1963, the first official women's marathon mark of 3 hours and 27 minutes was set by Dale Greig.  Times decreased until 1971, when Beth Bonner first broke the three-hour barrier with a time of 2:55.  A year later, President Nixon signed the Title IX law, which said that no person could be excluded from participating in sports on the basis of sex.  This was a turning point in women's running and resulted in federal funding for schools that supported women athletes.  In 1978, Grete Waitz set a new world marathon record of 2:32 at the New York City Marathon.  Joan Benoit broke that record by ten minutes in 1983 and went on to win the first-ever women's Olympic marathon in 1984.
































What yoga does to your body and brain


Between the 1st and 5th century C.E., the Hindu sage Patanjali recorded the Indian meditative tradition in the Yoga Sutras, a collection of 196 aphorisms. He defined yoga as restraining the mind from focusing on external things in order to achieve full consciousness. According to him, there are three core approaches to yoga: physical postures, breathing exercises, and spiritual contemplation.
Scientists have been studying the effects of yoga for a long time, but it is hard to make specific claims. Because experiments are performed on small groups of people, the results lack diversity; they are also subjective, since they rely on self-reporting. Moreover, because yoga combines a unique set of activities, it is hardly possible to determine which activity produces which health benefit.
Nevertheless, some benefits do have scientific support. First, yoga has been shown to increase strength and flexibility. Stretching through yoga postures brings water to the muscles; as a result, stem cells, which differentiate into muscle and other tissue cells, are produced. Yoga also has a notable therapeutic effect on musculoskeletal disorders, reducing pain and improving mobility.
Furthermore, yoga has been shown to benefit the lungs and heart. People with lung disorders such as asthma and bronchitis have narrowed passageways that carry oxygen and weakened membranes that transfer oxygen into the blood. Yoga breathing relaxes the lung muscles and thus increases oxygen diffusion. A higher concentration of oxygen in the blood benefits the heart, lowering blood pressure and reducing the risk of cardiovascular disease.
Moreover, yoga has remarkable psychological effects. Although these are among the most commended benefits of yoga, they are the hardest to prove because of the lack of evidence. It is widely believed that yoga helps treat depression and anxiety, but given the wide variety of diagnoses, origins, and severities of these conditions, it is hard to determine yoga's exact impact.


The benefits of a good night's sleep
Although sleep occupies nearly one-third of our lives, we pay little attention to it. The reason for this neglect is probably the common misconception that sleep is lost time. In fact, during sleep our vital systems are regulated and balanced. Furthermore, sleep restructures our brain, thereby positively affecting our memory.
It was shown long ago that we forget about 40% of new information within the first 20 minutes. This loss can be inhibited through memory consolidation, the process by which, with help from the hippocampus, information is transferred from short-term to long-term memory.
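To see how fast that loss is, one can fit the passage's figure to the classic exponential forgetting curve (a standard simplification of my own choosing, not a model the text specifies), $R = e^{-t/S}$, where $R$ is retention, $t$ is elapsed time, and $S$ is the memory's stability. Retaining 60% after 20 minutes gives

$$S = \frac{-t}{\ln R} = \frac{-20\ \text{min}}{\ln 0.6} \approx 39\ \text{minutes},$$

meaning that, under this model, an unconsolidated memory fades by a factor of $e$ roughly every 39 minutes.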
The importance of the hippocampus in memory processes was first demonstrated by the neuropsychologist Brenda Milner. Studying a patient whose hippocampus had been removed, she discovered that his ability to store both short-term and long-term memories decreased significantly; however, he was still able to learn physical tasks through repetition. This revealed that the hippocampus is involved in the consolidation of long-term declarative memory (facts), but not in the consolidation of procedural memory.
As technology developed, our understanding of the consolidation process improved. First, information is captured in neurons. Signals then travel to the hippocampus, where, through neuroplasticity, synaptic buds are formed, strengthening the neural network in which the information will be retained as long-term memory.
Memory consolidation works better in some situations than in others. For example, we remember information received during stress or another highly emotional state better, which can be explained by the link between the hippocampus and emotion. But one of the major factors contributing to memory consolidation is sleep.
Sleep is divided into four stages, the deepest of which are slow-wave sleep and rapid eye movement (REM) sleep. EEG machines monitoring the stages of sleep have recorded impulses traveling among the brainstem, hippocampus, thalamus, and cortex, which allowed scientists to determine which type of memory is consolidated during each stage.
During the slow-wave sleep stage, declarative memory is encoded into temporary storage in the interior part of the hippocampus. There, through a continuous dialogue between the hippocampus and cortex, it is reactivated and, as a result, stored in long-term storage. In the REM phase, procedural memory is consolidated.


EFFECTS OF ACID RAIN

Acid rain has been linked to widespread environmental damage. In soil, acid rain dissolves and washes away nutrients needed by plants. It can also dissolve toxic substances, such as aluminum and mercury, which are naturally present in some soils, freeing these toxins to pollute water or to poison plants that absorb them. Acid rain slows the growth of plants, especially trees. It also attacks trees more directly by eating holes in the waxy coating of leaves and needles, causing brown dead spots. If many such spots form, a tree loses some of its ability to make food through photosynthesis. Also, organisms that cause disease can infect the tree through its injured leaves. Most farm crops are less affected by acid rain than are forests. The deep soils of many farm regions can absorb and neutralize large amounts of acid. Mountain farms are more at risk: the thin soils at these higher elevations cannot neutralize so much acid, and excessive amounts of nutrients can be leached out of the soil by acid rains.

Acid rain also falls into and drains into streams, lakes, and marshes. Where there is snow cover in winter, local waters grow suddenly more acidic when the snow melts in the spring. Most natural waters are close to chemically neutral, neither acidic nor alkaline, with pH between 6 and 8; in some lakes, the water now has a pH value of less than 5 as a result of acid rain. The effects of acid rain on wildlife can be far-reaching. If a population of one plant or animal is adversely affected by acid rain, animals that feed on that organism may also suffer. Ultimately, an entire ecosystem may become endangered.

Acid rain and the dry deposition of acidic particles damage buildings, statues, automobiles, and other structures made of stone, metal, or any other material exposed to weather for long periods. The corrosive damage can be expensive and, where historic structures such as the Parthenon in Greece and the Taj Mahal in India are concerned, tragic.

The acidification of surface waters causes little direct harm to people; it is safe to swim in even the most acidified lakes. However, toxic substances leached from soil can pollute local water supplies, and airborne acidic particles can irritate the lungs and make breathing difficult, especially for people who already have asthma, bronchitis, or other respiratory diseases.

Acid pollution has one surprising effect that may be beneficial. Sulfates in the upper atmosphere reflect some sunlight back into space, and thus tend to slow down global warming. Scientists believe that acid pollution may have delayed the onset of warming by several decades in the middle of the 20th century.
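A note on the pH figures above (the definition is standard chemistry, not something stated in the passage): pH is a logarithmic measure of hydrogen-ion concentration,

$$\mathrm{pH} = -\log_{10}[\mathrm{H^+}],$$

so a lake at pH 5 holds $10^{-5}$ mol/L of hydrogen ions – ten times the concentration of a pH 6 lake and a thousand times that of a pH 8 lake. Each one-unit drop in pH therefore represents a tenfold increase in acidity.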


Today’s shopping mall has as its antecedents historical marketplaces, such as Greek agoras, European piazzas, and Asian bazaars. The purpose of these sites, as with the shopping mall, is both economic and social. People go not only to buy and sell wares, but also to be seen, catch up on news, and be part of the human drama. Both the marketplace and its descendant the mall might also contain restaurants, banks, theaters, and professional offices.

The mall is also the product of the creation of suburbs. Although villages outside of cities have existed since antiquity, it was the technological and transportation advances of the 19th century that gave rise to a conscious exodus of the population away from crowded, industrialized cities toward quieter, more rural towns. Since the suburbs typically have no centralized marketplace, shopping centers or malls were designed to fill the needs of the changing community, providing retail stores and services to an increasing suburban population.

The shopping mall differs from its ancient counterparts in a number of important ways. While piazzas and bazaars were open-air venues, the modern mall is usually enclosed. Since the suburbs are spread out geographically, shoppers drive to the mall, which means that parking areas must be an integral part of a mall’s design. Ancient marketplaces were often set up in public spaces, but shopping malls are designed, built, and maintained by a separate management firm as a unit.

The first shopping mall was built by J. C. Nichols in 1922 near Kansas City, Missouri. The Country Club Plaza was designed to be an automobile-centered plaza, as its patrons drove their own cars to it, rather than take mass transportation as was often the case for city shoppers. It was constructed according to a unified plan, rather than as a random group of stores. Nichols’ company owned and operated the mall, leasing space to a variety of tenants.

The first enclosed mall was the Galleria Vittorio Emanuele II, built in Milan, Italy, in 1865–77. Inspired by its design, Victor Gruen took the shopping and dining experience of the Galleria to a new level when he created the Southdale Center Mall in 1956. Located in a suburb of Minneapolis, it was intended to be a substitute for the traditional city center. The 95-acre, two-level structure had a constant climate-controlled temperature of 72 degrees, and included shops, restaurants, a school, a post office, and a skating rink. Works of art, decorative lighting, fountains, tropical plants, and flowers were placed throughout the mall. Southdale afforded people the opportunity to experience the pleasures of urban life while protected from the harsh Minnesota weather.

In the 1980s, giant megamalls were developed. While Canada has had the distinction of being home to the largest of the megamalls for over twenty years, that honor will soon go to Dubai, where the Mall of Arabia is being completed at a cost of over five billion U.S. dollars. The 5.3 million square foot West Edmonton Mall in Alberta, Canada, opened in 1981, with over 800 stores, 110 eating establishments, a hotel, an amusement park, a miniature-golf course, a church, a zoo, and a 438-foot-long lake. Often referred to as the “eighth wonder of the world,” the West Edmonton Mall is the number-one tourist attraction in the area, and will soon be expanded to include more retail space, including a facility for sports, trade shows, and conventions.
The largest enclosed megamall in the United States is the Mall of America in Bloomington, Minnesota, which employs over 12,000 people. It has over five hundred retail stores, an amusement park which includes an indoor roller coaster, a walk-through aquarium, a college, and a wedding chapel. The mall contributes over one billion dollars each year to the economy of the state of Minnesota. Its owners have proposed numerous expansion projects, but have been hampered by safety concerns due to the mall’s proximity to an airport.






Kala namak is a kiln-fired rock salt with a sulphurous, pungent smell used in the Indian subcontinent. It is also known as "Himalayan black salt", Sulemani namak, bit noon, bire noon, bit loona, bit lobon, kala loon, sanchal, guma loon, or pada loon, and is manufactured from the salts mined in the regions surrounding the Himalayas. The condiment is composed largely of sodium chloride, with several other components lending the salt its colour and smell. The smell is mainly due to its sulfur content. Because of the presence of greigite (Fe3S4, iron(II,III) sulfide) in the mineral, it forms brownish-pink to dark violet translucent crystals when whole. When ground into a powder, its colour ranges from purple to pink. Kala namak has been praised in Ayurveda and used for its perceived medical qualities.

Production

The raw material for producing kala namak was originally obtained from natural halite mined in certain locations of the Himalayas in Northern India, or from salt harvested from the North Indian salt lakes of Sambhar or Didwana. Traditionally, the salt was transformed from its relatively colourless raw natural forms into the dark coloured, commercially sold kala namak through a reductive chemical process that converts some of the naturally occurring sodium sulfate of the raw salt into pungent hydrogen sulfide and sodium sulfide. This involves firing the raw salts in a kiln or furnace for 24 hours while sealed in a ceramic jar with charcoal along with small quantities of harad seeds, amla, bahera, babul bark, or natron. The fired salt melts, the chemical reaction occurs, and the salt is then cooled, stored, and aged prior to sale.[7][3] Kala namak is prepared in this manner in northern India, with production concentrated in Hisar district, Haryana. The salt crystals appear black and are usually ground to a fine powder that is purple. Although kala namak may traditionally have been produced chemically from impure deposits of salt (sodium chloride) fired with the required chemicals (small quantities of sodium sulfate, sodium bisulfate and ferric sulfate) and charcoal in a furnace, it is now common to simply add the required chemicals to pure salt before firing. Reportedly, it is also possible to create similar products through reductive heat treatment of salt with 5–10% sodium carbonate, sodium sulfate, and some sugar.
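The reductive step described above can be sketched with a textbook carbothermic reaction (an illustrative equation assuming charcoal acts as the reducing agent; the actual kiln chemistry, with the added seeds and barks, is more complex):

$$\mathrm{Na_2SO_4} + 4\,\mathrm{C} \longrightarrow \mathrm{Na_2S} + 4\,\mathrm{CO}$$

The sodium sulfide produced this way can then react with moisture and weak acids to release small amounts of hydrogen sulfide ($\mathrm{H_2S}$), the compound chiefly responsible for the salt's characteristic smell.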

Composition 

Kala namak consists primarily of sodium chloride and trace impurities of sodium sulfate, sodium bisulfate, sodium bisulfite, sodium sulfide, iron sulfide and hydrogen sulfide. Sodium chloride provides kala namak with its salty taste, iron sulfide provides its dark violet hue, and all the sulfur compounds give kala namak its slight savory taste as well as a highly distinctive smell, with hydrogen sulfide being the most prominent contributor to the smell. The acidic bisulfates/bisulfites contribute a mildly sour taste. Although hydrogen sulfide is toxic in high concentrations, the amount present in kala namak used in food is small and thus its effects on health are negligible. 

Uses

Kala namak is used extensively in the South Asian cuisines of India, Pakistan, Bangladesh and Nepal as a condiment or added to chaats, chutneys, salads, fruit, raitas and many other savory snacks. Chaat masala, a South Asian spice blend, is dependent upon black salt for its characteristic sulfurous, egg-like aroma. Those who are not accustomed to black salt often describe the smell as resembling flatulence. Black salt is sometimes used sparingly as a topping for fruits or snacks in North India and Pakistan. Kala namak is sometimes applied to tofu in vegan egg recipes. Kala namak is considered a cooling spice in Ayurveda and is used as a laxative and digestive aid. It has also been noted to relieve flatulence and heartburn. It is used in Jammu to cure goitres. This salt is also used to treat hysteria and for making toothpastes by combining it with other mineral and plant ingredients.[3] The uses for goitre and hysteria are dubious: goitre, which results from dietary iodine deficiency, would not be remedied unless iodide were present in the natural salt. In the United States, the Food and Drug Administration warned a manufacturer of dietary supplements, including one consisting of Himalayan salt, to discontinue marketing the products using unproven claims of health benefits.


Petroleum, also known as crude oil or simply oil, is a naturally occurring yellowish-black liquid mixture of mainly hydrocarbons,[1] and is found in geological formations. The name petroleum covers both naturally occurring unprocessed crude oil and petroleum products that consist of refined crude oil. Petroleum is primarily recovered by oil drilling, which is carried out after studies of structural geology, sedimentary basin analysis, and reservoir characterisation. Unconventional reserves such as oil sands and oil shale also exist.

Once extracted, oil is refined and separated, most easily by distillation, into innumerable products for direct use or use in manufacturing. Products include fuels such as petrol (gasoline), diesel, kerosene and jet fuel; asphalt and lubricants; chemical reagents used to make plastics; and solvents, textiles, refrigerants, paint, synthetic rubber, fertilizers, pesticides, pharmaceuticals, and thousands of others. Petroleum is used in manufacturing a vast variety of materials essential for modern life,[2] and it is estimated that the world consumes about 100 million barrels (16 million cubic metres) each day.

Petroleum production can be extremely profitable and was critical to global economic development in the 20th century, with some countries, the so-called "oil states", gaining significant economic and international power because of their control of oil production.

Petroleum exploitation can be damaging to the environment and human health. Extraction, refining and burning of petroleum fuels all release large quantities of greenhouse gases, making petroleum one of the major contributors to climate change. Other negative environmental effects include direct releases, such as oil spills, as well as air and water pollution at almost all stages of use. These environmental effects have direct and indirect health consequences for humans. Oil has also been a source of internal and inter-state conflict, leading to both state-led wars and other resource conflicts.

Production of petroleum is estimated to reach peak oil before 2035[3] as global economies reduce their dependence on petroleum as part of climate change mitigation and a transition towards renewable energy and electrification.[4] Oil has played a key role in industrialization and economic development.[5]
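As a quick unit check on the consumption figure above (using the standard conversion of about 0.159 m³ per oil barrel, which the text does not state):

$$10^{8}\ \text{barrels/day} \times 0.159\ \mathrm{m^3/barrel} \approx 1.6 \times 10^{7}\ \mathrm{m^3/day},$$

which matches the 16 million cubic metres per day quoted in the passage.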