Very difficult texts

Milankovitch proposed in the early twentieth century that the ice ages were caused by variations in the Earth’s orbit around the Sun. For some time this theory was considered untestable, largely because there was no sufficiently precise chronology of the ice ages with which the orbital variations could be matched. To establish such a chronology it is necessary to determine the relative amounts of land ice that existed at various times in the Earth’s past. A recent discovery makes such a determination possible: relative land-ice volume for a given period can be deduced from the ratio of two oxygen isotopes, 16 and 18, found in ocean sediments. Almost all the oxygen in water is oxygen 16, but a few molecules out of every thousand incorporate the heavier isotope 18. When an ice age begins, the continental ice sheets grow, steadily reducing the amount of water evaporated from the ocean that will eventually return to it. Because heavier isotopes tend to be left behind when water evaporates from the ocean surface, the remaining ocean water becomes progressively enriched in oxygen 18. The degree of enrichment can be determined by analyzing ocean sediments of the period, because these sediments are composed of calcium carbonate shells of marine organisms, shells that were constructed with oxygen atoms drawn from the surrounding ocean. The higher the ratio of oxygen 18 to oxygen 16 in a sedimentary specimen, the more land ice there was when the sediment was laid down.

As an indicator of shifts in the Earth’s climate, the isotope record has two advantages. First, it is a global record: there is remarkably little variation in isotope ratios in sedimentary specimens taken from different continental locations. Second, it is a more continuous record than that taken from rocks on land. Because of these advantages, sedimentary evidence can be dated with sufficient accuracy by radiometric methods to establish a precise chronology of the ice ages. The dated isotope record shows that the fluctuations in global ice volume over the past several hundred thousand years have a pattern: an ice age occurs roughly once every 100,000 years.

These data have established a strong connection between variations in the Earth’s orbit and the periodicity of the ice ages. However, it is important to note that other factors, such as volcanic particulates or variations in the amount of sunlight received by the Earth, could potentially have affected the climate. The advantage of the Milankovitch theory is that it is testable: changes in the Earth’s orbit can be calculated and dated by applying Newton’s laws of gravity to progressively earlier configurations of the bodies in the solar system. Yet the lack of information about other possible factors affecting global climate does not make them unimportant.
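The proxy rule described above can be made concrete with a minimal sketch. The ratios below are invented for illustration only (the passage says merely that a few molecules per thousand carry the heavier isotope); the point is simply that the sediment layer with the highest oxygen-18 to oxygen-16 ratio corresponds to the greatest land-ice volume at the time it was deposited.

```python
# Minimal sketch of the isotope proxy described in the passage.
# The 18O/16O ratios are hypothetical, chosen only to illustrate the rule:
# a higher ratio in a sediment layer implies more land ice when it was laid down.

samples = {
    "layer_A": 0.0020,  # hypothetical ratio, on the order of "a few per thousand"
    "layer_B": 0.0023,
    "layer_C": 0.0021,
}

# Rank the layers from least to most land ice at the time of deposition.
for name, ratio in sorted(samples.items(), key=lambda item: item[1]):
    print(f"{name}: 18O/16O = {ratio:.4f}")

most_ice = max(samples, key=samples.get)
print(f"Greatest land-ice volume when {most_ice} was deposited.")
```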


For millennia, the coconut has been central to the lives of Polynesian and Asian peoples. In the western world, on the other hand, coconuts have always been exotic and unusual, sometimes rare. The Italian merchant traveller Marco Polo apparently saw coconuts in South Asia in the late 13th century, and among the mid-14th-century travel writings of Sir John Mandeville there is mention of ‘great Notes of Ynde’ (great Nuts of India). Today, images of palm-fringed tropical beaches are clichés used in the west to sell holidays, chocolate bars, fizzy drinks and even romance.

Typically, we envisage coconuts as brown cannonballs that, when opened, provide sweet white flesh. But we see only part of the fruit and none of the plant from which they come. The coconut palm has a smooth, slender, grey trunk, up to 30 metres tall. This is an important source of timber for building houses, and is increasingly being used as a replacement for endangered hardwoods in the furniture construction industry. The trunk is surmounted by a rosette of leaves, each of which may be up to six metres long. The leaves have hard veins in their centres which, in many parts of the world, are used as brushes after the green part of the leaf has been stripped away. Immature coconut flowers are tightly clustered together among the leaves at the top of the trunk. The flower stems may be tapped for their sap to produce a drink, and the sap can also be reduced by boiling to produce a type of sugar used for cooking.

Coconut palms produce as many as seventy fruits per year, weighing more than a kilogram each. The wall of the fruit has three layers: a waterproof outer layer, a fibrous middle layer and a hard, inner layer. The thick fibrous middle layer produces coconut fibre, ‘coir’, which has numerous uses and is particularly important in manufacturing ropes. The woody innermost layer, the shell, with its three prominent ‘eyes’, surrounds the seed. An important product obtained from the shell is charcoal, which is widely used in various industries as well as in the home as a cooking fuel. When broken in half, the shells are also used as bowls in many parts of Asia.

Inside the shell are the nutrients (endosperm) needed by the developing seed. Initially, the endosperm is a sweetish liquid, coconut water, which is enjoyed as a drink, but also provides the hormones which encourage other plants to grow more rapidly and produce higher yields. As the fruit matures, the coconut water gradually solidifies to form the brilliant white, fat-rich, edible flesh or meat. Dried coconut flesh, ‘copra’, is made into coconut oil and coconut milk, which are widely used in cooking in different parts of the world, as well as in cosmetics. A derivative of coconut fat, glycerine, acquired strategic importance in a quite different sphere, as Alfred Nobel introduced the world to his nitroglycerine-based invention: dynamite.

Their biology would appear to make coconuts the great maritime voyagers and coastal colonizers of the plant world. The large, energy-rich fruits are able to float in water and tolerate salt, but cannot remain viable indefinitely; studies suggest that after about 110 days at sea they are no longer able to germinate. Literally cast onto desert island shores, with little more than sand to grow in and exposed to the full glare of the tropical sun, coconut seeds are able to germinate and root. The air pocket in the seed, created as the endosperm solidifies, protects the embryo. In addition, the fibrous fruit wall that helped it to float during the voyage stores moisture that can be taken up by the roots of the coconut seedling as it starts to grow.

There have been centuries of academic debate over the origins of the coconut. There were no coconut palms in West Africa, the Caribbean or the east coast of the Americas before the voyages of the European explorers Vasco da Gama and Columbus in the late 15th and early 16th centuries. 16th century trade and human migration patterns reveal that Arab traders and European sailors are likely to have moved coconuts from South and Southeast Asia to Africa and then across the Atlantic to the east coast of America. But the origin of coconuts discovered along the west coast of America by 16th century sailors has been the subject of centuries of discussion. Two diametrically opposed origins have been proposed: that they came from Asia, or that they were native to America. Both suggestions have problems. In Asia, there is a large degree of coconut diversity and evidence of millennia of human use – but there are no relatives growing in the wild. In America, there are close coconut relatives, but no evidence that coconuts are indigenous. These problems have led to the intriguing suggestion that coconuts originated on coral islands in the Pacific and were dispersed from there.



Japanese firms have achieved the highest levels of manufacturing efficiency in the world automobile industry. Some observers of Japan have assumed that Japanese firms use the same manufacturing equipment and techniques as United States firms but have benefited from the unique characteristics of Japanese employees and the Japanese culture. However, if this were true, then one would expect Japanese auto plants in the United States to perform no better than factories run by United States companies. This is not the case: Japanese-run automobile plants located in the United States and staffed by local workers have demonstrated higher levels of productivity when compared with factories owned by United States companies.

Other observers link high Japanese productivity to higher levels of capital investment per worker. But a historical perspective leads to a different conclusion. When the two top Japanese automobile makers matched and then doubled United States productivity levels in the mid-sixties, capital investment per employee was comparable to that of United States firms. Furthermore, by the late seventies, the amount of fixed assets required to produce one vehicle was roughly equivalent in Japan and in the United States. Since capital investment was not higher in Japan, it had to be other factors that led to higher productivity.

A more fruitful explanation may lie with Japanese production techniques. Japanese automobile producers did not simply implement conventional processes more effectively: they made critical changes in United States procedures. For instance, the mass-production philosophy of United States automakers encouraged the production of huge lots of cars in order to utilize fully expensive, component-specific equipment and to occupy fully workers who have been trained to execute one operation efficiently. Japanese automakers chose to make small-lot production feasible by introducing several departures from United States practices, including the use of flexible equipment that could be altered easily to do several different production tasks and the training of workers in multiple jobs. Automakers could schedule the production of different components or models on single machines, thereby eliminating the need to store the buffer stocks of extra components that result when specialized equipment and workers are kept constantly active.



Children learn to construct language from those around them. Until about the age of three, children tend to develop their language by modeling the speech of their parents, but from that time on, peers have a growing influence as models for language development in children. It is easy to observe that, when adults and older children interact with younger children, they tend to modify their language to improve communication with them, and this modified language is called caretaker speech.

Caretaker speech is often used quite unconsciously; few people actually study how to modify language when speaking to young children but, instead, without thinking, find ways to reduce the complexity of language in order to communicate effectively with young children. A caretaker will unconsciously speak in one way with adults and in a very different way with young children. Caretaker speech tends to be slower speech with short, simple words and sentences which are said in a higher-pitched voice with exaggerated inflections and many repetitions of essential information. It is not limited to what is commonly called baby talk, which generally refers to the use of simplified, repeated syllable expressions such as ma-ma, boo-boo, bye-bye, and wa-wa, but also includes the simplified sentence structures repeated in sing-song inflections.

Caretaker speech serves the very important function of allowing young children to acquire language more easily. The higher-pitched voice and the exaggerated inflections tend to focus the small child on what the caretaker is saying, the simplified words and sentences make it easier for the small child to begin to comprehend, and the repetitions reinforce the child's developing understanding. Then, as a child's speech develops, caretakers tend to adjust their language in response to the improved language skills, again quite unconsciously. Parents and older children regularly adjust their speech to a level that is slightly above that of a younger child; without studied recognition of what they are doing, these caretakers will speak in one way to a one-year-old and in a progressively more complex way as the child reaches the age of two or three.

An important point to note is that the function covered by caretaker speech, that of assisting a child to acquire language in small and simple steps, is an unconsciously used but extremely important part of the process of language acquisition and as such is quite universal. Studying cultures where children do not acquire language through caretaker speech is difficult because such cultures are difficult to find. The question of why caretaker speech is universal is not clearly understood; instead, proponents on either side of the nature vs. nurture debate argue over whether caretaker speech is a natural function or a learned one. Those who believe that caretaker speech is a natural and inherent function in humans believe that it is human nature for children to acquire language and for those around them to encourage their language acquisition naturally; the presence of a child is itself a natural stimulus that increases the rate of caretaker speech among those present.
In contrast, those who believe that caretaker speech develops through nurturing rather than nature argue that a person who is attempting to communicate with a child will learn by trying out different ways of communicating to determine which is the most effective from the reactions to the communication attempts; a parent might, for example, learn to use speech with exaggerated inflections with a small child because the exaggerated inflections do a better job of attracting the child's attention than do more subtle inflections. Whether caretaker speech results from nature or nurture, it does play an important and universal role in child language acquisition.



Coral colonies require a series of complicated events and circumstances to develop into the characteristically intricate reef structures for which they are known. These events and circumstances involve physical and chemical processes as well as delicate interactions among various animals and plants for coral colonies to thrive. The basic element in the development of coralline reef structures is a group of animals from the Anthozoa class, called stony corals, that is closely related to jellyfish and sea anemones. These small polyps (the individual animals that make up the coral reef), which are for the most part only a fraction of an inch in length, live in colonies made up of an immeasurable number of polyps clustered together. Each individual polyp obtains calcium from the seawater where it lives to create a skeleton around the lower part of its body, and the polyps attach themselves both to the living tissue and to the external skeletons of other polyps. Many polyps tend to retreat inside of their skeletons during hours of daylight and then stretch partially outside of their skeletons during hours of darkness to feed on minute plankton from the water around them. The mouth at the top of each body is surrounded by rings of tentacles used to grab onto food, and these rings of tentacles make the polyps look like flowers with rings of clustered petals; because of this, biologists for years thought that corals were plants rather than animals.

Once these coralline structures are established, they reproduce very quickly. They build in upward and outward directions to create a fringe of living coral surrounding the skeletal remnants of once-living coral. That coralline structures are commonplace in tropical waters around the world is due to the fact that they reproduce so quickly rather than the fact that they are hardy life-forms easily able to withstand external forces of nature. They cannot survive in water that is too dirty, and they need water that is at least 72°F (22°C) to exist, so they are formed only in waters ranging from 30° north to 30° south of the equator. They need a significant amount of sunlight, so they live only within an area between the surface of the ocean and a few meters beneath it. In addition, they require specific types of microscopic algae for their existence, and their skeletal shells are delicate in nature and are easily damaged or fragmented. They are also prey to other sea animals such as sponges and clams that bore into their skeletal structures and weaken them.

Coral colonies cannot build reef structures without considerable assistance. The many openings in and among the skeletons must be filled in and cemented together by material from around the colonies. The filling material often consists of fine sediments created either from the borings and waste of other animals around the coral or from the skeletons, shells, and remnants of dead plants and animals. The material that is used to cement the coral reefs comes from algae and other microscopic forms of seaweed. An additional part of the process of reef formation is the ongoing compaction and cementation that occurs throughout the process. Because of the soluble and delicate nature of the material from which coral is created, the relatively unstable crystals of corals and shells break down over time and are then rearranged as a more stable form of limestone.

The coralline structures that are created through these complicated processes are extremely variable in form. They may, for example, be treelike and branching, or they may have more rounded and compact shapes. What they share in common, however, is the extraordinary variety of plant and animal life-forms that are a necessary part of the ongoing process of their formation.
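The passage states three concrete conditions for reef-building corals: water of at least 72°F (22°C), locations between 30° north and 30° south of the equator, and depths within a few meters of the surface. The minimal sketch below simply checks a site against those stated limits; the 10-meter depth cutoff is an assumption standing in for the passage's "a few meters."

```python
def can_support_reef(water_temp_f: float, latitude_deg: float, depth_m: float) -> bool:
    """Check a site against the conditions quoted in the passage.

    - water of at least 72 degrees F (22 degrees C)
    - between 30 degrees north and 30 degrees south of the equator
    - within a few meters of the surface (a 10 m cutoff is assumed here
      purely for illustration; the passage says only "a few meters")
    """
    warm_enough = water_temp_f >= 72.0
    tropical = abs(latitude_deg) <= 30.0
    shallow = depth_m <= 10.0
    return warm_enough and tropical and shallow

print(can_support_reef(water_temp_f=79.0, latitude_deg=18.0, depth_m=3.0))  # True
print(can_support_reef(water_temp_f=65.0, latitude_deg=45.0, depth_m=3.0))  # False
```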



America's passion for the automobile developed rather quickly in the beginning of the twentieth century. At the turn of that century, there were few automobiles, or horseless carriages, as they were called at the time, and those that existed were considered frivolous playthings of the rich. They were rather fragile machines that sputtered and smoked and broke down often; they were expensive toys that could not be counted on to get one where one needed to go; they could only be afforded by the wealthy class, who could afford both the expensive upkeep and the inherent delays that resulted from the use of a machine that tended to break down time and again. These early automobiles required repairs so frequently both because their engineering was at an immature stage and because roads were unpaved and often in poor condition. Then, when breakdowns occurred, there were no services such as roadside gas stations or tow trucks to assist drivers needing help in their predicament. Drivers of horse-drawn carriages considered the horseless mode of transportation foolhardy, preferring instead to rely on their four-legged "engines," which they considered a tremendously more dependable and cost-effective means of getting around.

Automobiles in the beginning of the twentieth century were quite unlike today's models. Many of them were electric cars, even though the electric models had quite a limited range and needed to be recharged frequently at electric charging stations; many others were powered by steam, though it was often required that drivers of steam cars be certified steam engineers due to the dangers inherent in operating a steam-powered machine. The early automobiles also lacked much emphasis on body design; in fact, they were often little more than benches on wheels, though by the end of the first decade of the century they had progressed to leather-upholstered chairs or sofas on thin wheels that absorbed little of the incessant pounding associated with the movement of these machines.

In spite of the rather rough and undeveloped nature of these early horseless carriages, something about them grabbed people's imagination, and their use increased rapidly, though not always smoothly. In the first decade of the last century, roads were shared by the horse-drawn and horseless variety of carriages, a situation that was rife with problems and required strict measures to control the incidents and accidents that resulted when two such different modes of transportation were used in close proximity. New York City, for example, banned horseless vehicles from Central Park early in the century because they had been involved in so many accidents, often causing injury or death; then, in 1904, New York state felt that it was necessary to control automobile traffic by placing speed limits of 20 miles per hour in open areas, 15 miles per hour in villages, and 10 miles per hour in cities or areas of congestion. However, the measures taken were less a means of limiting use of the automobile and more a way of controlling the effects of an invention whose use increased dramatically in a relatively short period of time. Fewer than 5,000 automobiles were sold in the United States for a total cost of approximately $5 million in 1900, while considerably more cars, 181,000, were sold for $215 million in 1910, and by the middle of the 1920s, automobile manufacturing had become the top industry in the United States and accounted for 6 percent of the manufacturing in the country.
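The sales figures in the closing sentence imply rough average prices per car; the quick arithmetic below uses only the totals quoted in the passage (they are approximate, so the per-car figures are approximate too).

```python
# Rough per-car averages implied by the sales figures quoted in the passage.
sales = {
    1900: {"cars": 5_000, "total_dollars": 5_000_000},
    1910: {"cars": 181_000, "total_dollars": 215_000_000},
}

for year, data in sales.items():
    avg = data["total_dollars"] / data["cars"]
    print(f"{year}: about ${avg:,.0f} per car")
# 1900: about $1,000 per car
# 1910: about $1,188 per car
```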



There is still much for astronomers to learn about pulsars. Based on what is known, the term pulsar is used to describe the phenomenon of short, precisely timed radio bursts that are emitted from somewhere in space. Though all is not known about pulsars, they are now believed to emanate from spinning neutron stars, highly reduced cores of collapsed stars that are theorized to exist.

Pulsars were discovered in 1967, when Jocelyn Bell, a graduate student at Cambridge University, noticed an unusual pattern on a chart from a radio telescope. What made this pattern unusual was that, unlike other radio signals from celestial objects, this series of pulses had a highly regular period of 1.33730119 seconds. Because day after day the pulses came from the same place among the stars, Cambridge researchers came to the conclusion that they could not have come from a local source such as an Earth satellite. A name was needed for this newly discovered phenomenon. The possibility that the signals were coming from a distant civilization was considered, and at that point the idea of naming the phenomenon L.G.M. (short for Little Green Men) was raised. However, after researchers had found three more regularly pulsing objects in other parts of the sky over the next few weeks, the name pulsar was selected instead of L.G.M.

As more and more pulsars were found, astronomers engaged in debates over their nature. It was determined that a pulsar could not be a star inasmuch as a normal star is too big to pulse so fast. The question was also raised as to whether a pulsar might be a white dwarf star, a dying star that has collapsed to approximately the size of the Earth and is slowly cooling off. However, this idea was also rejected because the fastest pulsar known at the time pulsed around thirty times per second, and a white dwarf, which is the smallest known type of star, would not hold together if it were to spin that fast. The final conclusion among astronomers was that only a neutron star, which is theorized to be the remaining core of a collapsed star that has been reduced to a highly dense radius of only around 10 kilometers, was small enough to be a pulsar. Further evidence of the link between pulsars and neutron stars was found in 1968, when a pulsar was found in the middle of the Crab Nebula. The Crab Nebula is what remains of the supernova of the year 1054, and inasmuch as it has been theorized that neutron stars sometimes remain following supernova explosions, it is believed that the pulsar coming from the Crab Nebula is just such a neutron star.

The generally accepted theory for pulsars is the lighthouse theory, which is based upon a consideration of the theoretical properties of neutron stars and the observed properties of pulsars. According to the lighthouse theory, a spinning neutron star emits beams of radiation that sweep through the sky, and when one of the beams passes over the Earth, it is detectable on Earth. It is known as the lighthouse theory because the emissions from neutron stars are similar to the pulses of light emitted from lighthouses as they sweep over the ocean; the name lighthouse is therefore actually more appropriate than the name pulsar.
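The figures quoted above translate directly into pulse rates; the short computation below is a sketch using only numbers from the passage, contrasting Bell's pulsar with the fastest pulsar known at the time.

```python
# Pulse rates implied by the figures quoted in the passage.
bell_period_s = 1.33730119            # period of the pulsar Bell found in 1967
bell_rate = 1.0 / bell_period_s
print(f"Bell's pulsar: about {bell_rate:.3f} pulses per second")   # ~0.748

fastest_rate = 30.0                   # fastest pulsar known at the time, per the passage
fastest_period_s = 1.0 / fastest_rate
print(f"Fastest known then: one pulse every {fastest_period_s:.3f} s")  # ~0.033 s
```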



Schizophrenia is in reality a cluster of psychological disorders in which a variety of behaviors are exhibited and which are classified in various ways. Though there are numerous behaviors that might be considered schizophrenic, common behaviors that manifest themselves in severe schizophrenic disturbances are thought disorders, delusions, and emotional disorders. Because schizophrenia is not a single disease but is in reality a cluster of related disorders, schizophrenics tend to be classified into various subcategories. The various subcategories of schizophrenia are based on the degree to which the various common behaviors are manifested in the patient as well as other factors such as the age of the schizophrenic patient at the onset of symptoms and the duration of the symptoms. Five of the more common subcategories of schizophrenia are simple, hebephrenic, paranoid, catatonic, and acute.

The main characteristic of simple schizophrenia is that it begins at a relatively early age and manifests itself in a slow withdrawal from family and social relationships with a gradual progression toward more severe symptoms over a period of years. Someone suffering from simple schizophrenia may early on simply be apathetic toward life, may maintain contact with reality a great deal of the time, and may be out in the world rather than hospitalized. Over time, however, the symptoms, particularly thought and emotional disorders, increase in severity.

Hebephrenic schizophrenia is a relatively severe form of the disease that is characterized by severely disturbed thought processes as well as highly emotional and bizarre behavior. Those suffering from hebephrenic schizophrenia have hallucinations and delusions and appear quite incoherent; their behavior is often extreme and quite inappropriate to the situation, perhaps full of unwarranted laughter, or tears, or obscenities that seem unrelated to the moment. This type of schizophrenia represents a rather severe and ongoing disintegration of personality that makes this type of schizophrenic unable to play a role in society.

Paranoid schizophrenia is a different type of schizophrenia in which the outward behavior of the schizophrenic often seems quite appropriate; this type of schizophrenic is often able to get along in society for long periods of time. However, a paranoid schizophrenic suffers from extreme delusions of persecution, often accompanied by delusions of grandeur. While this type of schizophrenic has strange delusions and unusual thought processes, his or her outward behavior is not as incoherent or unusual as a hebephrenic's behavior. A paranoid schizophrenic can appear alert and intelligent much of the time but can also turn suddenly hostile and violent in response to imagined threats.

Another type of schizophrenia is the catatonic variety, which is characterized by alternating periods of extreme excitement and stupor. There are abrupt changes in behavior, from frenzied periods of excitement to stuporous periods of withdrawn behavior. During periods of excitement, the catatonic schizophrenic may exhibit excessive and sometimes violent behavior; during the periods of stupor, the catatonic schizophrenic may remain mute and unresponsive to the environment.

A final type of schizophrenia is acute schizophrenia, which is characterized by a sudden onset of schizophrenic symptoms such as confusion, excitement, emotionality, depression, and irrational fear. The acute schizophrenic, unlike the simple schizophrenic, shows a sudden onset of the disease rather than a slow progression from one stage of it to the other. Additionally, the acute schizophrenic exhibits various types of schizophrenic behaviors during different episodes, sometimes exhibiting the characteristics of hebephrenic, catatonic, or even paranoid schizophrenia. In this type of schizophrenia, the patient's personality seems to have completely disintegrated.



In a theoretical model of decision making, a decision is defined as the process of selecting one option from among a group of options for implementation. Decisions are formed by a decision maker, the one who actually chooses the final option, in conjunction with a decision unit, all of those in the organization around the decision maker who take part in the process. In this theoretical model, the members of the decision unit react to an unidentified problem by studying the problem, determining the objectives of the organization, formulating options, evaluating the strengths and weaknesses of each of the options, and reaching a conclusion. Many different factors can have an effect on the decision, including the nature of the problem itself, external forces exerting an influence on the organization, the internal dynamics of the decision unit, and the personality of the decision maker.

During recent years, decision making has been studied systematically by drawing from such diverse areas of study as psychology, sociology, business, government, history, mathematics, and statistics. Analyses of decisions often emphasize one of three principal conceptual perspectives (though often the approach that is actually employed is somewhat eclectic). In the oldest of the three approaches, decisions are made by a rational actor, who makes a particular decision directly and purposefully in response to a specific threat from the external environment. It is assumed that this rational actor has clear objectives in mind, develops numerous reasonable options, considers the advantages and disadvantages of each option carefully, chooses the best option after careful analysis, and then proceeds to implement it fully. A variation of the rational actor model is a decision maker who is a satisfier, one who selects the first satisfactory option rather than continuing the decision-making process until the optimal decision has been reached.

A second perspective places an emphasis on the impact of routines on decisions within organizations. It demonstrates how organizational structures and routines such as standard operating procedures tend to limit the decision-making process in a variety of ways, perhaps by restricting the information available to the decision unit, by restricting the breadth of options among which the decision unit may choose, or by inhibiting the ability of the organization to implement the decision quickly and effectively once it has been taken. Pre-planned routines and standard operating procedures are essential to coordinate the efforts of large numbers of people in massive organizations. However, these same routines and procedures can also have an inhibiting effect on the ability of the organization to arrive at optimal decisions and implement them efficiently. In this sort of decision-making process, organizations tend to take not the optimal decision but the decision that best fits within the permitted operating parameters outlined by the organization.

A third conceptual perspective emphasizes the internal dynamics of the decision unit and the extent to which decisions are based on political forces within the organization. This perspective demonstrates how bargaining among individuals who have different interests and motives and varying levels of power in the decision unit leads to an eventual compromise that is not the preferred choice of any of the members of the decision unit.

Each of these three perspectives on the decision-making process demonstrates a different point of view on decision making, a different lens through which the decision-making process can be observed. It is safe to say that decision making in most organizations shows marked influences from each perspective; i.e., an organization strives to get as close as possible to the rational model in its decisions, yet the internal routines and dynamics of the organization come into play in the decision.
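A minimal sketch, with invented options and scores, of the contrast drawn in the first perspective: the rational actor evaluates every option and chooses the best, while the satisfier takes the first option that meets a threshold.

```python
# Toy contrast between the rational actor and the "satisfier" described above.
# The options and their scores are invented purely for illustration.

options = [("option_A", 6), ("option_B", 9), ("option_C", 7)]

def rational_actor(options):
    """Weigh every option and choose the one with the highest score."""
    return max(options, key=lambda opt: opt[1])[0]

def satisfier(options, threshold):
    """Choose the first option whose score clears the threshold."""
    for name, score in options:
        if score >= threshold:
            return name
    return None  # no satisfactory option found

print(rational_actor(options))           # option_B (the optimal choice)
print(satisfier(options, threshold=5))   # option_A (the first satisfactory choice)
```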



Millions of different species exist on the earth. These millions of species, which have evolved over billions of years, are the result of two distinct but simultaneously occurring processes: the processes of speciation and extinction.

One of the processes that affects the number of species on earth is speciation, which results when one species diverges into two distinct species as a result of disparate natural selection in separate environments. Geographic isolation is one common mechanism that fosters speciation; speciation as a result of geographic isolation occurs when two populations of a species become separated for long periods of time into areas with different environmental conditions. After the two populations are separated, they evolve independently; if this divergence continues long enough, members of the two distinct populations eventually become so different genetically that they are two distinct species rather than one. The process of speciation may occur within hundreds of years for organisms that reproduce rapidly, but for most species the process of speciation can take thousands to millions of years.

One example of speciation is the early fox, which over time evolved into two distinct species, the gray fox and the arctic fox. The early fox separated into populations which evolved differently in response to very different environments as the populations moved in different directions, one to colder northern climates and the other to warmer southern climates. The northern population adapted to cold weather by developing heavier fur; shorter ears, noses, and legs; and white fur to camouflage itself in the snow. The southern population adapted to warmer weather by developing lighter fur and longer ears, noses, and legs and by keeping its darker fur for better camouflage protection.

Another of the processes that affects the number of species on earth is extinction, which refers to the situation in which a species ceases to exist. When environmental conditions change, a species needs to adapt to the new environmental conditions, or it may become extinct. Extinction of a species is not a rare occurrence but is instead a rather commonplace one: it has, in fact, been estimated that more than 99 percent of the species that have ever existed have become extinct. Extinction may occur when a species fails to adapt to evolving environmental conditions in a limited area, a process known as background extinction. In contrast, a broader and more abrupt extinction, known as mass extinction, may come about as a result of a catastrophic event or global climatic change. When such a catastrophic event or global climatic change occurs, some species are able to adapt to the new environment, while those that are unable to adapt become extinct. From geological and fossil evidence, it appears that at least five great mass extinctions have occurred; the last mass extinction occurred approximately 65 million years ago, when the dinosaurs became extinct after 140 million years of existence on earth, marking the end of the Mesozoic Era and the beginning of the Cenozoic Era.

The fact that millions of species are in existence today is evidence that speciation has clearly kept well ahead of extinction. In spite of the fact that there have been numerous periods of mass extinction, there is clear evidence that periods of mass extinction have been followed by periods of dramatic increases in new species to fill the void created by the mass extinctions, though it may take 10 million years or more following a mass extinction for biological diversity to be rebuilt through speciation. When the dinosaurs disappeared 65 million years ago, for example, the evolution and speciation of mammals increased spectacularly over the millions of years that ensued.



In the late 1980s, a disaster involving the Exxon Valdez, an oil tanker tasked with transporting oil from southern Alaska to the West Coast of the United States, caused a considerable amount of damage to the environment of Alaska. Crude oil from Alaska's North Slope fields near Prudhoe Bay on the north coast of Alaska is carried by pipeline to the port of Valdez on the southern coast and from there is shipped by tanker to the West Coast. On March 24, 1989, the Exxon Valdez, a huge oil tanker more than three football fields in length, went off course in a 16-kilometer-wide channel in Prince William Sound near Valdez, Alaska, hitting submerged rocks and causing a tremendous oil spill. The resulting oil slick spread rapidly and coated more than 1,600 kilometers (1,000 miles) of coastline. Though actual numbers can never be known, it is believed that at least a half million birds, thousands of seals and otters, quite a few whales, and an untold number of fish were killed as a result.

Decades before this disaster, environmentalists had predicted just such an enormous oil spill in this area because of the treacherous nature of the waters due to the submerged reefs, icebergs, and violent storms there. They had urged that oil be transported to the continental United States by land-based pipeline rather than by oil tanker or by undersea pipeline to reduce the potential damage to the environment posed by the threat of an oil spill. Alyeska, a consortium of the seven oil companies working in Alaska's North Slope fields, argued against such a land-based pipeline on the basis of the length of time that such a pipeline would take to construct and on the belief, or perhaps wishful thinking, that the probability of a tanker spill in the area was extremely low. Government agencies charged with protecting the environment were assured by Alyeska and Exxon that such a pipeline was unnecessary because appropriate protective measures had been taken and that within five hours of any accident there would be enough equipment and trained workers to clean up any spill before it managed to cause much damage.

However, when the Exxon Valdez spill actually occurred, Exxon and Alyeska were unprepared, in terms of both equipment and personnel, to deal with the spill. Though it was a massive spill, appropriate personnel and equipment available in a timely fashion could have reduced the damage considerably. Exxon ended up spending billions of dollars on the clean-up itself and, in addition, spent further billions in fines and damages to the state of Alaska, the federal government, commercial fishermen, property owners, and others harmed by the disaster. The total cost to Exxon was more than $8 billion.

A step that could possibly have prevented this spill even though the tanker did run into submerged rocks would have been a double hull on the tanker. Today, almost all merchant ships have double hulls, but only a small percentage of oil tankers do. Legislation passed since the spill requires all new tankers to be built with double hulls, but many older tankers have received dispensations to avoid the $25 million cost per tanker to convert a single-hulled tanker to one with a double hull. However, compared with the $8.5 billion cost of the Exxon Valdez catastrophe, this is a paltry sum.
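The closing comparison can be made concrete with simple arithmetic: at the quoted $25 million per conversion, the $8.5 billion cost of the disaster would cover roughly 340 single-hull retrofits.

```python
# Arithmetic behind the closing comparison in the passage.
disaster_cost = 8_500_000_000     # total cost of the Exxon Valdez spill (dollars)
conversion_cost = 25_000_000      # cost to convert one single-hulled tanker (dollars)

conversions_covered = disaster_cost / conversion_cost
print(f"The spill cost would pay for about {conversions_covered:.0f} double-hull conversions.")
# The spill cost would pay for about 340 double-hull conversions.
```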



A considerable body of research has demonstrated a correlation between birth order and aspects such as temperament and behavior, and some psychologists believe that birth order significantly affects the development of personality. Psychologist Alfred Adler was a pioneer in the study of the relationship between birth order and personality. A key point in his research and in the hypothesis that he developed based on it was that it was not the actual numerical birth position that affected personality; instead, it was the similar responses in large numbers of families to children in specific birth order positions that had an effect. For example, first-borns, who have their parents to themselves initially and do not have to deal with siblings in the first part of their lives, tend to have their first socialization experiences with adults and therefore tend to find the process of peer socialization more difficult. In contrast, later-born children have to deal with siblings from the first moment of their lives and therefore tend to have stronger socialization skills.

Numerous studies since Adler's have been conducted on the effect of birth order on personality. These studies have tended to classify birth order types into four different categories: first-born, second-born and/or middle, last, and only child.

Studies have consistently shown that first-born children tend to exhibit similar positive and negative personality traits. First-borns have consistently been linked with academic achievement in various studies; in one study, the number of National Merit scholarship winners who are first-borns was found to be equal to the number of second- and third-borns combined. First-borns have been found to be more responsible and assertive than those born in other birth-order positions and tend to rise to positions of leadership more often than others; more first-borns have served in the U.S. Congress and as U.S. presidents than have those born in other birth-order positions. However, studies have also shown that first-borns tend to be more subject to stress and to be considered problem children more often than later-borns.

Second-born and/or middle children demonstrate markedly different tendencies from first-borns. They tend to feel inferior to the older child or children because it is difficult for them to comprehend that their lower level of achievement is a function of age rather than ability, and they often try to succeed in areas other than those in which their older sibling or siblings excel. They tend to be more trusting, accepting, and focused on others than the more self-centered first-borns, and they tend to have a comparatively higher level of success in team sports than do first-borns or only children, who more often excel in individual sports.

The last-born child is the one who tends to be the eternal baby of the family and thus often exhibits a strong sense of security. Last-borns collectively achieve the highest degree of social success and demonstrate the highest levels of self-esteem of all the birth-order positions. They often exhibit less competitiveness than older brothers and sisters and are more likely to take part in less competitive group games or in social organizations such as sororities and fraternities.

Only children tend to exhibit some of the main characteristics of first-borns and some of the characteristics of last-borns.
Only children tend to exhibit the strong sense of security and self-esteem exhibited by last-borns while, like first-borns, they are more achievement oriented and more likely than middle- or last-borns to achieve academic success. However, only children tend to have the most problems establishing close relationships and exhibit a lower need for affiliation than other children.


      






Aggressive behavior is any behavior that is intended to cause injury, pain, suffering, damage, or destruction. While aggressive behavior is often thought of as purely physical, verbal attacks such as screaming and shouting or belittling and humiliating comments aimed at causing harm and suffering can also be a type of aggression. What is key to the definition of aggression is that whenever harm is inflicted, be it physical or verbal, it is intentional. 

          Questions about the causes of aggression have long been of concern to both social and biological scientists. Theories about the causes of aggression cover a broad spectrum, ranging from those with biological or instinctive emphases to those that portray aggression as a learned behavior.

          Numerous theories are based on the idea that aggression is an inherent and natural human instinct. Aggression has been explained as an instinct that is directed externally toward others in a process called displacement, and it has been noted that aggressive impulses that are not channeled toward a specific person or group may be expressed indirectly through socially acceptable activities such as sports and competition in a process called catharsis. Biological, or instinctive, theories of aggression have also been put forth by ethologists, who study the behavior of animals in their natural environments. A number of ethologists have, based upon their observations of animals, supported the view that aggression is an innate instinct common to humans. 

          Two different schools of thought exist among those who view aggression as instinct. One group holds the view that aggression can build up spontaneously, with or without outside provocation, and that violent behavior will thus result, perhaps with little or no provocation. Another suggests that aggression is indeed an instinctive response but that, rather than occurring spontaneously and without provocation, it is a direct response to provocation from an outside source.

         In contrast to instinct theories, social learning theories view aggression as a learned behavior. This approach focuses on the effect that role models and reinforcement of behavior have on the acquisition of aggressive behavior. Research has shown that aggressive behavior can be learned through a combination of modeling and positive reinforcement of the aggressive behavior and that children are influenced by the combined forces of observing aggressive behavior in parents, peers, or fictional role models and of noting either positive reinforcement for the aggressive behavior or, minimally, a lack of negative reinforcement for the behavior. While research has provided evidence that the behavior of a live model is more influential than that of a fictional model, fictional models of aggressive behavior, such as those seen in movies and on television, do still have an impact on behavior. On-screen deaths or acts of violent behavior in certain television programs or movies can be counted in the tens, or hundreds, or even thousands; while some have argued that this sort of fictional violence does not in and of itself cause violence and may even have a beneficial cathartic effect, studies have shown correlations between viewing of violence and incidences of aggressive behavior in both childhood and adolescence. Studies have also shown that it is not just the modeling of aggressive behavior in either its real-life or fictional form that correlates with increased acts of violence in youths; a critical factor in increasing aggressive behaviors is the reinforcement of the behavior. If the aggressive role model is rewarded rather than punished for violent behavior, that behavior is more likely to be seen as positive and is thus more likely to be imitated.







The fossil remains of the first flying vertebrates, the pterosaurs, have intrigued paleontologists for more than two centuries. How such large creatures, which weighed in some cases as much as a piloted hang-glider and had wingspans from 8 to 12 meters, solved the problems of powered flight, and exactly what these creatures were (reptiles or birds) are among the questions scientists have puzzled over.

Perhaps the least controversial assertion about the pterosaurs is that they were reptiles. Their skulls, pelvises, and hind feet are reptilian. The anatomy of their wings suggests that they did not evolve into the class of birds. In pterosaurs a greatly elongated fourth finger of each forelimb supported a winglike membrane. The other fingers were short and reptilian, with sharp claws. In birds the second finger is the principal strut of the wing, which consists primarily of feathers. If the pterosaurs walked on all fours, the three short fingers may have been employed for grasping. When a pterosaur walked or remained stationary, the fourth finger, and with it the wing, could only turn upward in an extended inverted V-shape along each side of the animal’s body.

The pterosaurs resembled both birds and bats in their overall structure and proportions. This is not surprising because the design of any flying vertebrate is subject to aerodynamic constraints. Both the pterosaurs and the birds have hollow bones, a feature that represents a savings in weight. In the birds, however, these bones are reinforced more massively by internal struts.

Although scales typically cover reptiles, the pterosaurs probably had hairy coats. T. H. Huxley reasoned that flying vertebrates must have been warm-blooded because flying implies a high rate of metabolism, which in turn implies a high internal temperature. Huxley speculated that a coat of hair would insulate against loss of body heat and might streamline the body to reduce drag in flight. The recent discovery of a pterosaur specimen covered in long, dense, and relatively thick hairlike fossil material was the first clear evidence that his reasoning was correct.

Efforts to explain how the pterosaurs became airborne have led to suggestions that they launched themselves by jumping from cliffs, by dropping from trees, or even by rising into light winds from the crests of waves. Each hypothesis has its difficulties. The first wrongly assumes that the pterosaurs’ hind feet resembled a bat’s and could serve as hooks by which the animal could hang in preparation for flight. The second hypothesis seems unlikely because large pterosaurs could not have landed in trees without damaging their wings. The third calls for high waves to channel updrafts. The wind that made such waves, however, might have been too strong for the pterosaurs to control their flight once airborne.


    


Literature is at once the most intimate and the most articulate of the arts. It cannot impart its effect through the senses or the nerves as the other arts can; it is beautiful only through the intelligence; it is the mind speaking to the mind; until it has been put into absolute terms, of an invariable significance, it does not exist at all. It cannot awaken this emotion in one, and that in another; if it fails to express precisely the meaning of the author, if it does not say him, it says nothing, and is nothing. So that when a poet has put his heart, much or little, into a poem, and sold it to a magazine, the scandal is greater than when a painter has sold a picture to a patron, or a sculptor has modelled a statue to order. These are artists less articulate and less intimate than the poet; they are more exterior to their work; they are less personally in it; they part with less of themselves in the dicker.

It does not change the nature of the case to say that Tennyson and Longfellow and Emerson sold the poems in which they couched the most mystical messages their genius was charged to bear mankind. They submitted to the conditions which none can escape; but that does not justify the conditions, which are none the less the conditions of hucksters because they are imposed upon poets.

If it will serve to make my meaning a little clearer, we will suppose that a poet has been crossed in love, or has suffered some real sorrow, like the loss of a wife or child. He pours out his broken heart in verse that shall bring tears of sacred sympathy from his readers, and an editor pays him a hundred dollars for the right of bringing his verse to their notice. It is perfectly true that the poem was not written for these dollars, but it is perfectly true that it was sold for them. The poet must use his emotions to pay his provision bills; he has no other means; society does not propose to pay his bills for him. Yet, and at the end of the ends, the unsophisticated witness finds the transaction ridiculous, finds it repulsive, finds it shabby. Somehow he knows that if our huckstering civilization did not at every moment violate the eternal fitness of things, the poet's song would have been given to the world, and the poet would have been cared for by the whole human brotherhood, as any man should be who does the duty that every man owes it.

The instinctive sense of the dishonor which money-purchase does to art is so strong that sometimes a man of letters who can pay his way otherwise refuses pay for his work, as Lord Byron did, for a while, from a noble pride, and as Count Tolstoy has tried to do, from a noble conscience. But Byron's publisher profited by a generosity which did not reach his readers; and the Countess Tolstoy collects the copyright which her husband foregoes; so that these two eminent instances of protest against business in literature may be said not to have shaken its money basis. I know of no others; but there may be many that I am culpably ignorant of. Still, I doubt if there are enough to affect the fact that Literature is Business as well as Art, and almost as soon. At present business is the only human solidarity; we are all bound together with that chain, whatever interests and tastes and principles separate us.


No very satisfactory account of the mechanism that caused the formation of the ocean basins has yet been given. The traditional view supposes that the upper mantle of the earth behaves as a liquid when it is subjected to small forces for long periods and that differences in temperature under oceans and continents are sufficient to produce convection in the mantle of the earth, with rising convection currents under the midocean ridges and sinking currents under the continents. Theoretically, this convection would carry the continental plates along as though they were on a conveyor belt and would provide the forces needed to produce the split that occurs along the ridge. This view may be correct: it has the advantage that the currents are driven by temperature differences that themselves depend on the position of the continents. Such a back-coupling, in which the position of the moving plate has an impact on the forces that move it, could produce complicated and varying motions.

On the other hand, the theory is implausible because convection does not normally occur along lines, and it certainly does not occur along lines broken by frequent offsets or changes in direction, as the ridge is. Also it is difficult to see how the theory applies to the plate between the Mid-Atlantic Ridge and the ridge in the Indian Ocean. This plate is growing on both sides, and since there is no intermediate trench, the two ridges must be moving apart. It would be odd if the rising convection currents kept exact pace with them. An alternative theory is that the sinking part of the plate, which is denser than the hotter surrounding mantle, pulls the rest of the plate after it. Again it is difficult to see how this applies to the ridge in the South Atlantic, where neither the African nor the American plate has a sinking part.

Another possibility is that the sinking plate cools the neighboring mantle and produces convection currents that move the plates. This last theory is attractive because it gives some hope of explaining the enclosed seas, such as the Sea of Japan. These seas have a typical oceanic floor, except that the floor is overlaid by several kilometers of sediment. Their floors have probably been sinking for long periods. It seems possible that a sinking current of cooled mantle material on the upper side of the plate might be the cause of such deep basins. The enclosed seas are an important feature of the earth's surface and seriously require explanation because, in addition to the enclosed seas that are developing at present behind island arcs, there are a number of older ones of possibly similar origin, such as the Gulf of Mexico, the Black Sea, and perhaps the North Sea.



A polytheist always has favorites among the gods, determined by his own temperament, age, and condition, as well as his own interest, temporary or permanent. If it is true that everybody loves a lover, then Venus will be a popular deity with all. But from lovers she will elicit special devotion. In ancient Rome, when a young couple went out together to see a procession or other show, they would of course pay great respect to Venus when her image appeared on the screen. Instead of saying, "Isn't love wonderful?" they would say, "Great art thou, O Venus." In a polytheistic society you could tell a good deal about a person's frame of mind by the gods he favored, so that to tell a girl you were trying to woo that you thought Venus overrated was hardly the way to win her heart. But in any case, a lovesick youth or maiden would be spontaneously supplicating Venus.

The Greeks liked to present their deities in human form; it was natural to them to symbolize the gods as human beings glorified, idealized. But this fact is also capable of misleading us. We might suppose that the ancients were really worshipping only themselves; that they were, like Narcissus, beholding their own image in a pool, so that their worship was anthropocentric (man-centered) rather than theocentric (god-centered). We are in danger of assuming that they were simply constructing the god in their own image. This is not necessarily so. The gods must always be symbolized in one form or another. To give them a human form is one way of doing this, technically called anthropomorphism (from the Greek anthropos, a man, and morphé, form). People of certain temperaments and within certain types of culture seem to be more inclined to it than are others. It is, however, more noticeable in others than in oneself, and those who affect to despise it are sometimes conspicuous for their addiction to it. A German once said an Englishman's idea of God is an Englishman twelve feet tall. Such disparagement of anthropomorphism occurred in the ancient world, too. The Celts, for instance, despised Greek practice in this matter, preferring to use animals and other such symbols. The Egyptians favored more abstract and stylized symbols, among which a well-known example is the solar disk, a symbol of Ra, the sun-god. Professor C. S. Lewis tells of an Oxford undergraduate he knew who, priggishly despising the conventional images of God, thought he was overcoming anthropomorphism by thinking of the Deity as infinite vapor or smoke. Of course even the bearded-old-man image can be a better symbol of Deity than ever could be the image, even if this were psychologically possible, of an unlimited smog.

What is really characteristic of all polytheism, however, is not the worship of idols or humanity or forests or stars; it is, rather, the worship of innumerable powers that confront and affect us. The powers are held to be valuable in themselves; that is why they are to be worshipped. But the values conflict. The gods do not cooperate, so you have to play them off against each other. Suppose you want rain. You know of two gods, the dry-god who sends drought and the wet-god who sends rain. You do not suppose that you can just pray to the wet-god to get busy and simply ignore the dry-god. If you do so, the latter may be offended, so that no matter how hard the wet-god tries to oblige you, the dry-god will do his best to wither everything.
Because both gods are powerful, you must take both into consideration, begging the wet-god to be generous and beseeching the dry-god to stay his hand.

   


A newly issued report reveals in facts and figures what should have been known in principle: that quite a lot of business companies are going to go under during the coming decade, as tariff walls are progressively dismantled. Labor and capital valued at $12 billion are to be made idle through the impact of duty-free imports. As a result, 35,000 workers will be displaced. Some will move to other jobs and other departments within the same firm. Around 15,000 will have to leave the firm now employing them and work elsewhere. The report is measuring exclusively the influence of free trade with Europe. The authors do not take into account the expected expansion of production over the coming years. On the other hand, they are not sure that even the export predictions they make will be achieved. For this presupposes a business climate in which the pressure to increase productivity can materialize. There are two reasons why this scenario may not happen. The first one is that industry on the whole is not taking the initiatives necessary to adapt fully to the new price situation it will be facing as time goes by. This is another way of saying that the manufacturers do not realize what lies ahead. The government is to blame for not making the position absolutely clear. It should be saying that in ten years' time tariffs on all industrial goods imported from Europe will be eliminated. There will be no adjustment assistance for manufacturers who cannot adapt to this situation. The second obstacle to adjustment is not stressed in the same way in the report; it is the attitude of the service sector. Not only are service industries unaware that the Common Market treaty concerns them too; they are also artificially insulated from the physical pressures of international competition. The manufacturing sector has been forced to apply its nose to the grindstone for some time now, by the increasingly stringent import-liberalization program. The ancillary services on which the factories depend show a growing indifference to their work obligations. They seem unaware that overmanned ships, underutilized container equipment in the ports, and repeated work stoppages slow the country's attempts to narrow the trade gap. The remedy is to cut the fees charged by these services so as to reduce their earnings, in exactly the same way that earnings in industrial undertakings are reduced by the tariff reduction program embodied in the treaty with the European Community. There is no point in dismissing 15,000 industrial workers from their present jobs during the coming ten years if all the gain in productivity is wasted by costly harbor, transport, financial, administrative and other services. The free trade treaty is their concern as well. Surplus staff should be removed, if need be, from all workplaces, not just from the factories. Efficiency is everybody's business.



     The fundamental objectives of sociology are the same as those of science generally: discovery and explanation. To ~discover~ the essential data of social behavior and the connections among the data is the first objective of sociology. To ~explain~ the data and the connections is the second and larger objective. Science makes its advances in terms of both of these objectives. Sometimes it is the discovery of a new element or set of elements that marks a major breakthrough in the history of a scientific discipline. Closely related to such discovery is the discovery of relationships of data that had never been noted before. All of this is, as we know, of immense importance in science. But the drama of discovery, in this sense, can sometimes lead us to overlook the greater importance of explanation of what is revealed by the data. Sometimes decades, even centuries, pass before known connections and relationships are actually explained. Discovery and explanation are the two great interpenetrating, interacting realms of science. The order of reality that interests the scientists is the ~empirical~ order, that is, the order of data and phenomena revealed to us through observation or experience. To be precise or explicit about what is, and is not, revealed by observation is not always easy, to be sure. And often it is necessary for our natural powers of observation to be supplemented by the most intricate of mechanical aids for a given object to become "empirical" in the sense just used. That the electron is not as immediately visible as is the mountain range does not mean, obviously, that it is any less empirical. That social behavior does not lend itself to as quick and accurate description as, say, the chemical behavior of gases and compounds does not mean that social roles, statuses, and attitudes are any less empirical than molecules and tissues. What is empirical and observable today may have been nonexistent in scientific consciousness a decade ago. Moreover, the empirical is often data ~inferred~ from direct observation. All of this is clear enough, and we should make no pretense that there are not often shadow areas between the empirical and the nonempirical. Nevertheless, the first point to make about any science, physical or social, is that its world of data is the empirical world. A very large amount of scientific energy goes merely into the work of expanding the frontiers, through discovery, of the known, observable, empirical world.     From observation or discovery we move to ~explanation~. The explanation sought by the scientist is, of course, not at all like the explanation sought by the theologian or metaphysician. The scientist is not interested, not, that is, in his role of scientist, in ultimate, transcendental, or divine causes of what he sets himself to explain. He is interested in explanations that are as empirical as the data themselves. If it is the high incidence of crime in a certain part of a large city that requires explanation, the scientist is obliged to offer his explanation in terms of factors which are as empirically real as the phenomenon of crime itself. He does not explain the problem, for example, in terms of references to the will of God, demons, or original sin. A satisfactory explanation is not only one that is empirical, however, but one that can be stated in the terms of a ~causal proposition~. Description is an indispensable point of beginning, but description is not explanation. 
It is well to stress this point, for there are all too many scientists, or would-be scientists, who are primarily concerned with data gathering, data counting, and data describing, and who seem to forget that such operations, however useful, are but the first step. Until we have accounted for the problem at hand, explained it causally by referring the data to some principle or generalization already established, or to some new principle or generalization, we have not explained anything.





  In ~Scholasticism and Politics~, written during World War II, Maritain expressed discouragement at the pessimism and lack of self-confidence characteristic of the Western democracies, and in the postwar world he joined enthusiastically in the resurgence of that confidence. While stopping short of asserting that democracy as a political system flowed directly from correct philosophical principles, he nonetheless dismissed Fascism and Communism as inherently irrational. Bourgeois individualism was, however, implicitly immoral and, by breaking down all sense of community and shared moral values, would inevitably end in some form of statism: order imposed from above. In ~Integral Humanism~ (1936) and later works, he developed a systematic critique of the prevailing modern political ideologies and argued that a workable political order, which might appropriately be democracy, depended on a correct understanding of human nature and of natural moral law. Maritain became something of an Americanophile, seeking to counter not only what he regarded as European misconceptions about America but also the Americans' own self-deprecation. In ~Reflections on America~ (1958), he argued that Americans were not really materialistic but were the most idealistic people in the world, although theirs was an idealism often unformed and lacking in philosophical bases. America, he thought, offered perhaps the best contemporary prospect for the emergence of a truly Christian civilization, based not on governmental decree but on the gradual realization of Christian values on the part of a majority of the population. American saints were coming, he predicted.     But his postulation of a possible Christian civilization in America did not in any way temper his optimistic political liberalism, a facet of his thought which caused him to be held in suspicion by some of his fellow Catholics in the 1950s. The Dominican chaplain at Princeton, for example, refused to allow him to address the Catholic students. (One of the exquisite ironies of recent Catholic history was that Maritain in his last books was acerbically critical of secularizing priests, while the Dominican chaplain resigned from the priesthood and ended his days as a real estate salesman in Florida.) No doubt in part because of Raïssa's background, Maritain had an enduring interest in anti-Semitism, which he analyzed and criticized in two books, and he was one of the principal influences in the effort to establish better Jewish-Catholic relations. Racism he regarded as America's most severe flaw. As early as 1958 he was praising Martin Luther King, Jr., and the Chicago neighborhood organizer Saul Alinsky.     Maritain and, to a lesser extent, Gilson provided the program for a bold kind of Catholic intellectuality: an appropriation of medieval thought for modern use, not so much a medieval revival as a demonstration of the perennial relevance of the medieval philosophical achievement. The modern mind was to be brought back to its Catholic roots, not by the simple disparagement of modernity or by emphasis on the subjective necessity of faith, but by a rigorous and demanding appeal to reason. In the process, Scholastic principles would be applied in new and often daring ways.     In the end the gamble failed. Despite promising signs in the 1940s, secular thinkers did not finally find the Scholastic appeal persuasive. 
And, as is inevitable when an intellectual community is dominated so thoroughly by a single system of thought, a restiveness was building up in Catholic circles. Although Maritain insisted that Thomism, because of the central importance it gave to the act of existence, was the true existentialism, Catholic intellectuals of the 1950s were attracted to the movement which more usually went by that name; and Gabriel Marcel, a Catholic existentialist of the same generation as Gilson and Maritain, was available to mediate between faith and anguish. Catholic colleges in America were hospitable to existentialist and phenomenological currents at a time when few secular institutions were, and what Catholics sought there was primarily a philosophy which was serious about the metaphysical questions of existence, yet not as rationalistic, rigid, and abstract as Scholasticism often seemed to be.



  The economic condition of the low-income regions of the world is one of the great problems of our time. Their progress is important to the high-income countries, not only for humanitarian and political reasons but also because rapid economic growth in the low income countries could make a substantial contribution to the expansion and prosperity of the world economy as a whole. The governments of most high-income countries have in recent years undertaken important aid programs, both bilaterally and multilaterally, and have thus demonstrated their interest in the development of low-income countries. They have also worked within the General Agreement on Tariffs and Trade (GATT) for greater freedom of trade and, recognizing the special problems of low-income countries, have made special trading arrangements to meet their needs. But a faster expansion of trade with high-income countries is necessary if the low-income countries are to enjoy a satisfactory rate of growth. This statement is therefore concerned with the policies of high-income countries toward their trade with low-income countries. Our recommendations are based on the conviction that a better distribution of world resources and a more rational utilization of labor are in the general interest. A liberal policy on the part of high-income countries with respect to their trade with low-income countries will not only be helpful to the low-income countries but, when transitional adjustments have taken place, beneficial to the high-income countries as well. It is necessary to recognize however, that in furthering the development of low-income countries, the high-income countries can play only a supporting role. If development is to be successful, the main effort must necessarily be made by the people of the low-income countries. The high-income countries are, moreover, likely to provide aid and facilitate trade more readily and extensively where the low-income countries are seen to be making sound and determined efforts to help themselves, and thus to be making effective use of their aid and trade opportunities.     It is, then, necessary that the low-income countries take full account of the lessons that have been learned from the experience of recent years, if they wish to achieve successful development and benefit from support from high-income countries. Among the most important of these lessons are the following:     Severe damage has been done by inflation. A sound financial framework evokes higher domestic savings and investment as well as more aid and investment from abroad. Budgetary and monetary discipline and a more efficient financial and fiscal system help greatly to mobilize funds for investment and thereby decisively influence the rate of growth. Foreign aid should also be efficiently applied to this end. The energies of the people of low-income countries are more likely to be harnessed to the task of economic development where the policies of their governments aim to offer economic opportunity for all and to reduce excessive social inequalities.     Development plans have tended to concentrate on industrial investment. The growth of industry depends, however, on concomitant development in agriculture. A steady rise in productivity on the farms, where in almost all low-income countries a majority of the labor force works, is an essential condition of rapid over-all growth. 
Satisfactory development of agriculture is also necessary to provide an adequate market for an expanding industrial sector and to feed the growing urban population without burdening the balance of payments with heavy food imports. Diminishing surpluses in the high-income countries underline the need for a faster growth of agricultural productivity in low-income countries. Success in this should, moreover, lead to greater trade in agricultural products among the low-income countries themselves as well as to increased exports of some agricultural products to the high-income countries.     There can be no doubt about the urgency of the world food problem. Adequate nourishment and a balanced diet are not only necessary for working adults but are crucial for the mental and physical development of growing children. Yet, in a number of low-income countries where the diet is already insufficient the production of food has fallen behind the increase in population. A continuation of this trend must lead to endemic famine. The situation demands strenuous efforts in the low-income countries to improve the production, preservation, and distribution of food so that these countries are better able to feed themselves.



    It is indisputable that in order to fulfill its many functions, water should be clean and biologically valuable. The costs connected with the provision of biologically valuable water for food production and with the maintenance of sufficiently clean water, therefore, are primarily production costs. Purely "environmental" costs seem to be in this respect only costs connected with the safeguarding of cultural, recreational and sports functions which the water courses and reservoirs fulfill both in nature and in human settlements. The pollution problems of the atmosphere resemble those of the water only partly. So far, the supply of air has not been deficient as was the case with water, and the dimensions of the air-shed are so vast that a number of people still hold the opinion that air need not be economized. However, scientific forecasts have shown that the time may already be approaching when clear and biologically valuable air will become problem No. 1. Air being ubiquitous, people are particularly sensitive about any reduction in the quality of the atmosphere, the increased contents of dust and gaseous exhalations, and particularly about the presence of odors. The demand for purity of atmosphere, therefore, emanates much more from the population itself than from the specific sectors of the national economy affected by a polluted or even biologically aggressive atmosphere. The households' share in atmospheric pollution is far bigger than that of industry, which in turn further complicates the economic problems of atmospheric purity. Some countries have already collected positive experience with the reconstruction of whole urban sectors on the basis of new heating appliances based on the combustion of solid fossil fuels; estimates of the economic consequences of such measures have also been put forward.     In contrast to water, where the maintenance of purity would seem primarily to be related to the costs of production and transport, a far higher proportion of the costs of maintaining the purity of the atmosphere derives from environmental considerations. Industrial sources of gaseous and dust emissions are well known and classified; their location can be accurately identified, which makes them controllable. With the exception, perhaps, of the elimination of sulphur dioxide, technical means and technological processes exist which can be used for the elimination of all excessive impurities of the air from the various emissions. Atmospheric pollution caused by the private property of individuals (their dwellings, automobiles, etc.) is difficult to control. Some sources, such as motor vehicles, are very mobile, and they are thus capable of polluting vast territories. In this particular case, the cost of anti-pollution measures will have to be borne, to a considerable extent, by individuals, whether in the form of direct costs or indirectly in the form of taxes, dues, surcharges, etc. The problem of noise is a typical example of an environmental problem which cannot be solved only passively, i.e., merely by protective measures, but will require the adoption of active measures, i.e., direct interventions at the source. The costs of a complete protection against noise are so prohibitive as to make it unthinkable even in the economically most developed countries. At the same time it would not seem feasible, either economically or politically, to force the population to carry the costs of individual protection against noise, for example, by reinforcing the sound insulation of their homes. 
A solution of this problem probably cannot be found in the near future.



   With Friedrich Engels, Karl Marx in 1848 published the ~Communist Manifesto~, calling upon the masses to rise and throw off their economic chains. His maturer theories of society were later elaborated in his large and abstruse work ~Das Kapital~. Starting as a non-violent revolutionist, he ended life as a major social theorist more or less sympathetic with violent revolution, if such became necessary in order to change the social system which he believed to be frankly predatory upon the masses. On the theoretical side, Marx set up the doctrine of surplus value as the chief element in capitalistic exploitation. According to this theory, the ruling classes no longer employed military force primarily as a means to plundering the people. Instead, they used their control over employment and working conditions under the bourgeois capitalistic system for this purpose, paying only a bare subsistence wage to the worker while they appropriated all surplus values in the productive process. He further taught that the strategic disadvantage of the worker in industry prevented him from obtaining a fairer share of the earnings by bargaining methods and drove him to revolutionary procedures as a means to establishing his economic and social rights. This revolution might be peacefully consummated by parliamentary procedures if the people prepared themselves for political action by mastering the materialistic interpretation of history and by organizing politically for the final event. It was his belief that the aggressions of the capitalist class would eventually destroy the middle class and take over all their sources of income by a process of capitalistic absorption of industry, a process which has failed to occur in most countries. With minor exceptions, Marx's social philosophy is now generally accepted by left-wing labor movements in many countries, but rejected by centrist labor groups, especially those in the United States. In Russia and other Eastern European countries, however, Socialist leaders adopted the methods of violent revolution because of the opposition of the ruling classes. Yet, many now hold that the present Communist regime in Russia and her satellite countries is no longer a proletarian movement based on Marxist social and political theory, but a camouflaged imperialistic effort to dominate the world in the interest of a new ruling class. It is important, however, that those who wish to approach Marx as a teacher should not be "buffaloed" by his philosophic approach. They are very likely to be, in these days, because those most interested in propagating the ideas of Marx, the Russian Bolsheviks, have swallowed down his Hegelian philosophy along with his science of revolutionary engineering, and they look upon us, irreverent peoples who presume to meditate on social and even revolutionary problems without making our obeisance to the mysteries of Dialectic Materialism, as a species of unredeemed and well-nigh unredeemable barbarians. They are right in scorning our ignorance of the scientific ideas of Karl Marx and our indifference to them. They are wrong in scorning our distaste for having practical programs presented in the form of systems of philosophy. In that we simply represent a more progressive intellectual culture than that in which Marx received his education, a culture further emerged from the dominance of religious attitudes.



     The first and decisive step in the expansion of Europe overseas was the conquest of the Atlantic Ocean. That the nation to achieve this should be Portugal was the logical outcome of her geographical position and her history. Placed on the extreme margin of the old, classical Mediterranean world and facing the untraversed ocean, Portugal could adapt and develop the knowledge and experience of the past to meet the challenge of the unknown. Some centuries of navigating the coastal waters of Western Europe and Northern Africa had prepared Portuguese seamen to appreciate the problems which the Ocean presented and to apply and develop the methods necessary to overcome them. From the seamen of the Mediterranean, particularly those of Genoa and Venice, they had learned the organization and conduct of a mercantile marine, and from Jewish astronomers and Catalan mapmakers the rudiments of navigation. Largely excluded from a share in Mediterranean commerce at a time when her increasing and vigorous population was making heavy demands on her resources, Portugal turned southwards and westwards for opportunities of trade and commerce. At this moment of national destiny it was fortunate for her that in men of the calibre of Prince Henry, known as the Navigator, and King John II she found resolute and dedicated leaders. The problems to be faced were new and complex. The conditions for navigation and commerce in the Mediterranean were relatively simple, compared with those in the western seas. The landlocked Mediterranean, tideless and with a climatic regime of regular and well-defined seasons, presented few obstacles to sailors who were the heirs of a great body of sea lore garnered from the experiences of many centuries. What hazards there were, in the form of sudden storms or dangerous coasts, were known and could be usually anticipated. Similarly the Mediterranean coasts, though they might be for long periods in the hands of dangerous rivals, were described in sailing directions or laid down on the portolan charts drawn by Venetian, Genoese and Catalan cartographers. Problems of determining positions at sea, which confronted the Portuguese, did not arise. Though the Mediterranean seamen by no means restricted themselves to coastal sailing, the latitudinal extent of the Mediterranean was not great, and voyages could be conducted from point to point on compass bearings; the ships were never so far from land as to make it necessary to fix their positions in latitude by astronomical observations. Having made a landfall on a bearing, they could determine their precise position from prominent landmarks, soundings or the nature of the sea bed, after reference to the sailing directions or charts.     By contrast, the pioneers of ocean navigation faced much greater difficulties. The western ocean which extended, according to the speculations of the cosmographers, through many degrees of latitude and longitude, was an unknown quantity, but certainly subjected to wide variations of weather and without known bounds. Those who first ventured out over its waters did so without benefit of sailing directions or traditional lore. As the Portuguese sailed southwards, they left behind them the familiar constellations in the heavens by which they could determine direction and the hours of the night, and particularly the pole-star from which by a simple operation they could determine their latitude. 
Along the unknown coasts they were threatened by shallows, hidden banks, rocks and contrary winds and currents, with no knowledge of convenient shelter to ride out storms or of very necessary watering places. It is little wonder that these pioneers dreaded the thought of being forced on to a lee shore or of having to choose between these inshore dangers and the unrecorded perils of the open sea.
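The "simple operation" mentioned in the passage above is, at bottom, a matter of reading latitude off the height of the pole-star: because Polaris lies very close to the celestial pole, its altitude above the horizon is approximately equal to the observer's latitude. The short sketch below illustrates only that geometric relationship; it is not a reconstruction of any Portuguese table or instrument, the 0.7-degree offset is the modern value (the star stood several degrees from the pole in the fifteenth century), and the function and variable names are invented for this example.

    def latitude_from_polaris(observed_altitude_deg, polar_distance_deg=0.7):
        # Polaris is not exactly at the celestial pole, so the observed altitude
        # can differ from the true latitude by up to polar_distance_deg in either
        # direction, depending on where Polaris sits in its small daily circle.
        estimate = observed_altitude_deg
        return estimate, polar_distance_deg  # best estimate and its error band

    # Example: Polaris measured 38.2 degrees above the horizon
    lat, err = latitude_from_polaris(38.2)
    print(f"latitude is roughly {lat} degrees, give or take {err}")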



     In the past, American colleges and universities were created to serve a dual purpose: to advance learning and to offer to those who wished it a chance to become familiar with bodies of knowledge already discovered. To create and to impart, these were the hallmarks of American higher education prior to the most recent, tumultuous decades of the twentieth century. The successful institution of higher learning had never been one whose mission could be defined in terms of providing vocational skills or as a strategy for resolving societal problems. In a subtle way Americans believed postsecondary education to be useful, but not necessarily of immediate use. What the student obtained in college became beneficial in later life, residually, without direct application in the period after graduation.     Another purpose has now been assigned to the mission of American colleges and universities. Institutions of higher learning, public or private, commonly face the challenge of defining their programs in such a way as to contribute to the service of the community.     This service role has various applications. Most common are programs to meet the demands of regional employment markets, to provide opportunities for upward social and economic mobility, to achieve racial, ethnic, or social integration, or more generally to produce "productive" as compared to "educated" graduates. Regardless of its precise definition, the idea of a service-university has won acceptance within the academic community.     One need only be reminded of the change in language describing the two-year college to appreciate the new value currently being attached to the concept of a service-related university. The traditional two-year college has shed its pejorative "junior" college label and is generally called a "community" college, a clearly value-laden expression representing the latest commitment in higher education. Even the doctoral degree, long recognized as a required "union card" in the academic world, has come under severe criticism as the pursuit of learning for its own sake and the accumulation of knowledge without immediate application to a professor's classroom duties. The idea of a college or university that performs a triple function (communicating knowledge to students, expanding the content of various disciplines, and interacting in a direct relationship with society) has been the most important change in higher education in recent years.     This novel development is often overlooked. Educators have always been familiar with those parts of the two-year college curriculum that have a "service" or vocational orientation. Knowing this, otherwise perceptive commentaries on American postsecondary education underplay the impact of the attempt of colleges and universities to relate to, if not resolve, the problems of society. Whether the subject under review is student unrest, faculty tenure, the nature of the curriculum, the onset of collective bargaining, or the growth of collegiate bureaucracies, in each instance the thrust of these discussions obscures the larger meaning of the emergence of the service-university in American higher education. Even the highly regarded critique of Clark Kerr, currently head of the Carnegie Foundation, which set the parameters of academic debate around the evolution of the so-called "multiversity," failed to take account of this phenomenon and the manner in which its fulfillment changed the scope of higher education. 
To the extent that the idea of "multiversity" centered on matters of scale (how big is too big? how complex is too complex?), it obscured the fundamental question posed by the service-university: what is higher education supposed to do? Unless the commitment to what Samuel Gould has properly called the "communiversity" is clearly articulated, the success of any college or university in achieving its service-education functions will be effectively impaired. . . .     The most reliable report about the progress of Open Admissions became available at the end of August, 1974. What the document showed was that the dropout rate for all freshmen admitted in September, 1970, after seven semesters, was about 48 percent, a figure that corresponds closely to national averages at similar colleges and universities. The discrepancy between the performance of "regular" students (those who would have been admitted into the four-year colleges with 80% high school averages and into the two-year units with 75%) and Open Admissions freshmen provides a better indication of how the program worked. Taken together the attrition rate (from known and unknown causes) was 48 percent, but the figure for regular students was 36 percent while for Open Admissions categories it was 56 percent. Surprisingly, the statistics indicated that the four-year colleges retained or graduated more of the Open Admissions students than the two-year colleges, a finding that did not reflect experience elsewhere. Not surprisingly, perhaps, the figures indicated a close relationship between academic success defined as retention or graduation and high school averages. Similarly, it took longer for the Open Admissions students to generate college credits and graduate than regular students, a pattern similar to national averages. The most important statistics, however, relate to the findings regarding Open Admissions students, and these indicated as a projection that perhaps as many as 70 percent would not graduate from a unit of the City University.
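The attrition figures quoted above can be tied together with simple arithmetic. If the overall 48 percent rate is treated as a straightforward enrollment-weighted average of the 36 percent rate for regular students and the 56 percent rate for Open Admissions students (an assumption; the report is not quoted as saying how the overall figure was computed), the implied mix of the 1970 freshman class can be recovered, as in the sketch below. The variable names are illustrative only.

    # Solve regular_rate * w + open_rate * (1 - w) = overall_rate for w,
    # the implied share of "regular" freshmen, under the weighted-average
    # assumption described above.
    overall_rate, regular_rate, open_rate = 0.48, 0.36, 0.56

    w = (open_rate - overall_rate) / (open_rate - regular_rate)

    print(f"implied share of regular freshmen: {w:.0%}")              # 40%
    print(f"implied share of Open Admissions freshmen: {1 - w:.0%}")  # 60%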


    "The United States seems totally indifferent to our problems," charges French Foreign Minister Claude Cheysson, defending his Government's decision to defy President Reagan and proceed with construction of the Soviet gas pipeline. West German Chancellor Helmut Schmidt endorsed the French action and sounded a similar note. Washington's handling of the pipeline, he said, has "casta shadow over relations" between Europe and the United States," damaging confidence as regards future agreements.'' But it's not just the pipeline that has made a mockery of Versailles. Charges of unfair trade practices and threats of retaliation in a half-dozen industries are flying back and forth over the Atlantic-and the Pacific, too|min a worrisome crescendo. Businessmen, dismayed by the long siege of sluggish economic growth that has left some 30 million people in the West unemployed, are doing what comes naturally: pressuring politicians to restrain imports, subsidize exports, or both. Steelmakers in Bonn and Pittsburgh want help; so do auto makers in London and Detroit, textile, apparel and shoe manufacturers throughout the West and farmers virtually everywhere.     Democratic governments, the targets of such pressure, are worried about their own political fortunes and embarrassed by their failure to generate strong growth and lower unemployment. The temptation is strong to take the path of least resistance and tighten up on trade-even for a Government as devoted to the free market as Ronald Reagan's. In the past 18 months, Washington, beset by domestic producers, has raised new barriers against imports in autos, textiles and sugar. Steel is likely to be next. Nor is the United States alone. European countries, to varying degrees, have also sought to defend domestic markets or to promote exports through generous subsidies. . . .  The upcoming meeting, to consider trade policy for the 1980's, is surely well timed. "It has been suggested often that world trade policy is 'at a crossroads'|mbut such a characterization of the early 1980's may be reasonably accurate," says C. Fred Bergsten, a former Treasury official in the Carter Administration, now director of a new Washington think tank, the Institute for International Economics.     The most urgent question before the leaders of the industrial world is whether they can change the fractious atmosphere of this summer before stronger protective measures are actually put in place. So far, Mr. Bergsten says, words have outweighed deeds. The trade picture is dismal. World trade reached some $2 trillion a year in 1980 and hasn't budged since .In the first half of this year, Mr. Bergsten suspects that trade probably fell as the world economy stayed flat. But, according to his studies, increased protectionism is not the culprit for the slowdown in trade|mat least not yet. The culprit instead is slow growth and recession, and the resulting slump in demand for imports. . . .   But there are fresh problems today that could be severely damaging. Though tariffs and outright quotas are low after three rounds of intense international trade negotiations in the past two decades |mnew trade restraints, often bound up in voluntary agreements between countries to limit particular imports, have sprouted in recent years like mushrooms in a wet wood. Though the new protectionism is more subtle than the old-fashioned variety, it is no less damaging to economic efficiency and, ultimately, to prospects for world economic growth.     
A striking feature is that the new protectionism has focused on the same limited sectors in most of the major industrial countries: textiles, steel, electronics, footwear, shipbuilding and autos. Similarly, it has concentrated on supply from Japan and the newly industrialized countries.     When several countries try to protect the same industries, the dealings become difficult. Take steel. Since 1977, the European Economic Community has been following a plan to eliminate excess steel capacity, using bilateral import quotas along the way to soften the blow to the steelworkers. The United States, responding to similar pressure at home and to the same problem of a world oversupplied with steel, introduced a "voluntary" quota system in 1969, and, after a brief period of no restraint, developed a complex trigger price mechanism in 1978.


Each spring vast flocks of songbirds migrate north from Mexico to the United States, but since the 1960s their numbers have fallen by up to 50 percent. Frog populations around the world have declined in recent years. The awe-inspiring California condor survives today only because of breeding programs in zoos. Indeed, plant and animal species are disappearing from the earth at an alarming rate, and many scientists believe that human activity is largely responsible. Biodiversity, or the biological variety that thrives in a healthy ecosystem, became the focus of intense international concern during the 1990s. If present trends continue, Harvard University biologist Edward O. Wilson, one of the leading authorities on biodiversity, estimates that the world could lose 20 percent of all existing species by the year 2020. Biodiversity has become such a vogue word that academics have begun to take surveys of scientists to find out what they mean by it. For Adrian Forsyth, director of conservation biology for Conservation International, biodiversity is the totality of biological diversity from the molecular level to the ecosystem level. That includes the distinct species of all living things on Earth. Scientists have identified 1.4 million species, but no one knows how many actually exist, especially in hard-to-reach areas such as the deep heart of a rain forest or the bottom of an ocean. Biologists believe there may be 5 million to 10 million species, though some estimates run as high as 100 million. Habitat destruction as a result of people's use or development of land is considered the leading threat to biodiversity. For example, habitat loss is thought to be causing severe drops in the populations of migratory songbirds in North America, perhaps as much as 50 percent since the 1960s. Scientists studying songbirds that migrate from warm winter quarters in the southern United States, Mexico, and Central America to summer nesting grounds in the northern United States and Canada have found that the birds are losing habitat at both ends of their long journey. In the tropics forests are being cleared for agriculture, and in the north they are being cut down for roads, shopping centers, and housing subdivisions. As a result, bird censuses in the United States have shown a 33 percent decline in the population of rose-breasted grosbeaks since 1980. Another cause of the decline in biodiversity is the introduction of new species. Sometimes a new species is brought to an area intentionally, but sometimes it happens accidentally. In Illinois the native mussel populations in the Illinois River have fallen drastically since the 1993 summer flooding washed large numbers of zebra mussels into the river from Lake Michigan. Zebra mussels, native to the Caspian Sea, were inadvertently introduced to the Great Lakes, probably in the mid-1980s, by oceangoing cargo ships. Pollution is yet another threat to plants and animals. The St. Lawrence River, one habitat of the endangered beluga whale, drains the Great Lakes, historically one of the most industrialized regions in the world. The whales now have such high levels of toxic chemicals stored in their bodies that technically they qualify as hazardous waste under Canadian law. The effects of pollution can be very subtle and hard to prove because often the toxins do not kill animals outright but instead impair their natural defenses against disease or their ability to reproduce. 
Habitat loss is thought to be one reason for the decline in frog populations worldwide, because frogs live in wetlands, many of which have been filled in over the years for agriculture and development. But researchers theorize that another possible cause is increased exposure to ultraviolet radiation from the Sun as a result of the thinning of the atmosphere's ozone layer; the increased dose of ultraviolet radiation may be suppressing the frogs' immune systems, making them more vulnerable to a wide range of diseases. Of all the causes of species extinction and habitat loss, the one that seems to be at the heart of the matter is the size of the population of just one species, Homo sapiens. In 1994 the world population was estimated at more than 5.6 billion, more than double the number in 1950. With a larger population come increased demands for food, clothing, housing, and energy, all of which will likely lead to greater habitat destruction, more pollution, and less biological diversity. The number of people in the world continues to grow, but there is evidence that the population of the industrialized nations has more or less stabilized. That's important because although the population of these countries makes up only 25 percent of the world total, the developed world consumes 75 percent of the world's resources. The United Nations is treating the increase in the world's population as a serious matter. A 1994 UN-sponsored conference on population produced a 113-page plan to stabilize the number of people in the world at 7.27 billion by 2015. Otherwise, the UN feared, world population could mushroom to 12.5 billion by 2050.
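The weight of the 25 percent / 75 percent figures cited above is easier to see when restated per person: on those numbers, the average resident of an industrialized country consumes roughly nine times as much of the world's resources as the average resident of the developing world. The short calculation below merely rearranges the passage's own figures; it adds no new data.

    # Per-capita resource use implied by the passage's figures: 25% of people
    # (industrialized nations) consume 75% of resources, so the other 75% of
    # people share the remaining 25%.
    developed_pop, developed_resources = 0.25, 0.75

    per_capita_developed = developed_resources / developed_pop               # 3.0
    per_capita_developing = (1 - developed_resources) / (1 - developed_pop)  # about 0.33

    print(per_capita_developed / per_capita_developing)  # about 9 times as much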


Although new and effective AIDS drugs have brought hope to many HIV-infected persons, a number of social and ethical dilemmas still confront researchers and public-health officials. The latest combination drug therapies are far too expensive for infected persons in the developing world—particularly in sub-Saharan Africa, where the majority of AIDS deaths have occurred. In these regions, where the incidence of HIV infection continues to soar, the lack of access to drugs can be catastrophic. In 1998, responding to an international outcry, several pharmaceutical firms announced that they would slash the price of AIDS drugs in developing nations by as much as 75 percent. However, some countries argued that drug firms had failed to deliver on their promises of less expensive drugs. In South Africa government officials developed legislation that would enable the country to override the patent rights of drug firms by importing cheaper generic medicines made in India and Thailand to treat HIV infection. In 1998, 39 pharmaceutical companies sued the South African government on the grounds that the legislation violated international trade agreements. Pharmaceutical companies eventually dropped their legal efforts in April 2001, conceding that South Africa’s legislation did comply with international trading laws. The end of the legal battle was expected to pave the way for other developing countries to gain access to more affordable AIDS drugs. AIDS research in the developing world has raised ethical questions pertaining to the clinical testing of new therapies and potential vaccines. For example, controversy erupted over 1997 clinical trials that tested a shorter course of Zidovudine (or AZT) therapy in HIV-infected pregnant women in developing countries. Earlier studies had shown that administering AZT to pregnant women for up to six months prior to birth could cut mother-to-child transmission of HIV by up to two-thirds. The treatment’s $800 cost, however, made it too expensive for patients in developing nations. The controversial 1997 clinical trials, which were conducted in Thailand and other regions in Asia and Africa, tested a shorter course of AZT treatment, costing only $50. Some pregnant women received AZT, while others received a placebo—a medically inactive substance often used in drug trials to help scientists determine the effectiveness of the drug under study. Ultimately the shorter course of AZT treatment proved to be successful and is now standard practice in a growing number of developing nations. However, at the time of the trials, critics charged that using a placebo on HIV-infected pregnant women—when AZT had already been shown to prevent mother-to-child transmission—was unethical and needlessly placed babies at fatal risk. Defenders of the studies countered that a placebo was necessary to accurately gauge the effectiveness of the AZT short-course treatment. Some critics speculated whether such a trial, while apparently acceptable in the developing nations of Asia and Africa, would ever have been viewed as ethical, or even permissible, in a developed nation like the United States. Similar ethical questions surround the testing of AIDS vaccines in developing nations. Vaccines typically use weakened or killed HIV to spark antibody production. In some vaccines, these weakened or killed viruses have the potential to cause infection and disease. 
Critics questioned whether it is ethical to place all the risk on test subjects in developing regions such as sub-Saharan Africa, where a person infected by a vaccine would have little or no access to medical care. At the same time, with AIDS causing up to 5,500 deaths a day in Africa, others feel that developing nations must pursue any medical avenue for stemming the epidemic and protecting people from the virus. For the struggling economies of some developing nations, AIDS has brought yet another burden: AIDS tends to kill young adults in the prime of their lives—the primary breadwinners and caregivers in families. According to figures released by the United Nations in 1999, AIDS has shortened the life expectancy in some African nations by an average of seven years. In Zimbabwe, life expectancy has dropped from 61 years in 1993 to 49 in 1999. The next few decades may see it fall as low as 41 years. Upwards of 11 million children have been orphaned by the AIDS epidemic. Those children who survive face a lack of income, a higher risk of malnutrition and disease, and the breakdown of family structure. In Africa, the disease has had a heavy impact on urban professionals—educated, skilled workers who play a critical role in the labor force of industries such as agriculture, education, transportation, and government. The decline in the skilled workforce has already damaged economic growth in Africa, and economists warn of disastrous consequences in the future. The social, ethical, and economic effects of the AIDS epidemic are still being played out, and no one is certain what the consequences will be. Despite the many grim facts of the AIDS epidemic, however, humanity is armed with proven, effective weapons against the disease: knowledge, education, prevention, and the ever-growing store of information about the virus’s actions.


The late 1980s found the landscape of popular music in America dominated by a distinctive style of rock and roll known as glam rock or hair metal, so called because of the over-styled hair, makeup, and wardrobe worn by the genre’s ostentatious rockers. Bands like Poison, Whitesnake, and Mötley Crüe popularized glam rock with their power ballads and flashy style, but the product had worn thin by the early 1990s. Just as superficial as the 80s, glam rockers were shallow, short on substance, and musically inferior. In 1991, a Seattle-based band called Nirvana shocked the corporate music industry with the release of its debut single, “Smells Like Teen Spirit,” which quickly became a huge hit all over the world. Nirvana’s distorted, guitar-laden sound and thought-provoking lyrics were the antithesis of glam rock, and the youth of America were quick to pledge their allegiance to the brand-new movement known as grunge. Grunge actually got its start in the Pacific Northwest during the mid-1980s. Nirvana had simply mainstreamed a sound and culture that got its start years before with bands like Mudhoney, Soundgarden, and Green River. Grunge rockers derived their fashion sense from the youth culture of the Pacific Northwest: a melding of punk rock style and outdoors clothing like flannels, heavy boots, worn out jeans, and corduroys. At the height of the movement’s popularity, when other Seattle bands like Pearl Jam and Alice in Chains were all the rage, the trappings of grunge were working their way to the height of American fashion. Like the music, grunge fashion was quickly embraced by teenagers because it represented defiance against corporate America and shallow pop culture. The popularity of grunge music was ephemeral; by the mid- to late-1990s, its influence upon American culture had all but disappeared, and most of its recognizable bands were nowhere to be seen on the charts. The heavy sound and themes of grunge were replaced on the radio waves by boy bands like the Backstreet Boys, and the bubble gum pop of Britney Spears and Christina Aguilera. There are many reasons why the Seattle sound faded out of the mainstream as quickly as it rocketed to prominence, but the most glaring reason lies at the defiant, anti-establishment heart of the grunge movement itself. It is very hard to buck the trend when you are the one setting it, and many of the grunge bands were never comfortable with the fame that was thrust upon them. Ultimately, the simple fact that many grunge bands were so against mainstream rock stardom took the movement back to where it started: underground. The fickle American mainstream public, as quick as they were to hop on to the grunge bandwagon, were just as quick to hop off and move on to something else.



Solar storms are natural events that occur when high-energy particles from the sun hit the earth. They take place when the sun releases energy in the form of outbursts or eruptions. Such outbursts are also called solar flares. The energy is set free and carried out into space.

Solar storms contain gas and other matter and can travel at extremely high speeds. When such particles hit the Earth or any other planet with an atmosphere, they cause a geomagnetic storm - a disturbance in the magnetic field that surrounds our planet. Normally such outbursts are not dangerous. They are the cause of polar lights - bright, colorful lights in the skies of the northern regions. They may, however, endanger us in other ways. Such outbursts of the sun’s energy can cause communication problems, interfere with satellite reception or lead to incorrect GPS readings. In the past they have even shut down electric power grids. The most damaging events happened in the 19th century, when solar storms started fires in North America and Europe. They caused auroras as far south as the equator. Luckily the world did not have the high technological standard we have today. Such forceful eruptions could do much more damage today. An American investigation in 2008 showed that extreme solar storms could cause billions of dollars in damage. Several organizations around the world monitor the sun’s activity and the disturbances that occur in its atmosphere. They also have detectors that show variations in the Earth’s magnetic field. Solar cycles repeat themselves every 11 years. Right now the Earth is experiencing the most severe solar storm since 2003. Sky watchers in Canada and Scandinavia are already reporting sightings of more northern lights than usual. As the sun is currently becoming more active, we will see more and more solar flares over the next few years. However, the solar cycle we are in at the moment is relatively quiet compared to others of recent decades. The last major problems caused by solar storms occurred in 1994, when communications satellites over Canada malfunctioned and power in many parts of the country went out for a few hours. When solar storms pass through the earth’s atmosphere, radiation levels are higher for a few days. Airlines are especially worried about these outbursts of radiation because long-distance flights use polar routes, where disruptions are most severe. During such storms there are periods when the crew cannot communicate with ground control stations. Astronauts orbiting the earth in the International Space Station may also be in danger because radiation levels are much higher than normal. Outbursts of solar energy even affect animals that are sensitive to changes in the Earth’s magnetic field. During such events they lose their orientation and get lost.


Although the overall situation of women has improved in the past decades, they are still discriminated against when it comes to work. They get paid less than men for the same work and in some cases do not have the same opportunities as men to reach high-ranking positions. However, this is starting to change. Organizations like the United Nations and UNESCO, in particular, are giving women better opportunities. Many European Union countries have introduced quotas for women in high-ranking positions. But in other areas they are still second-class citizens. In the industrial countries of the developed world they have become more than equal to males. In the past four decades the proportion of women who have paid jobs has gone up from below half to 64%. There are, however, differences from country to country. While in Scandinavian countries almost three quarters of all women have a job, the number of females in the labor force in southern and eastern Europe is only about 50%. The role of women changed drastically during the 20th century. In the early 1900s female workers were employed mainly in factories or worked as servants. In the course of time they became more educated and started working as nurses, teachers, even doctors and lawyers. In the 1960s, women, for the first time, were able to actively plan their families. Birth control pills and other contraceptives made it possible for women to have a career, a family, or both. Many went to high school and college and sought a job. In the 1970s women in developed countries started to become a major part of the workforce. More females in the workforce have brought along many advantages for industries and employers. They have a wider variety of workers to choose from, and women often have better ideas and make positive contributions to how things are done. Additional workers also help the economy thrive. They spend money and contribute to the growth of national income. In many countries they provide extra income for a country whose population is getting older and older. In America, economists think that the GDP is 25% larger than it would be without women in the workforce. According to a new survey, about one billion women are expected to enter the workforce in the next decade. This should not only contribute to economic growth but also improve gender equality. Even though women should be treated equally, they still get, on average, about 18% less pay for the same work. Females suffer from inequalities in other areas too. Many women wish to start a career and search for fulfillment outside family life. However, in most cases it is harder for them to get to the absolute top than it is for men. Only about 3% of the top CEOs are women. While the situation of women in developed countries may have come to a standstill, females in Asian countries like China, Singapore or South Korea are experiencing a boom in good job offers. More and more of them are reaching top positions. One issue that is still hard for women to manage is child care. Not only do they spend more on education and babysitters; single mothers who raise a child alone, in particular, find it nearly impossible to reach a top position at the same time. Even if a woman has a working husband, men are not keen on taking leave to care for the baby. Most men still consider this a woman’s job. Nevertheless, there are countries where women and men lead equal lives and also find equal opportunity. 
Among Scandinavian countries, which generally offer many opportunities for women, Iceland ranks first. The United States is currently in 19th place, up from the 31st spot, mainly because President Obama has offered women more jobs in government offices. At the bottom of the list are developing countries like Yemen and Pakistan. 


Rice is one of the world’s most important food crops. It is a grain, like wheat and corn. Almost all the people who depend on rice for their food live in Asia. Young rice plants are bright green. After planting, the grain is ripe about 120 to 180 days later. It turns golden yellow during the time of harvest. In some tropical countries rice can be harvested up to three times a year. Each rice plant carries hundreds or thousands of kernels. A typical rice kernel is 6 to 10 mm long and has four parts: The hull is the hard outer part, which is not good to eat. The bran layers protect the inner parts of the kernel. They have vitamins and minerals in them. The endosperm makes up most of the kernel. It has a lot of starch in it. The embryo is a small part from which a new rice plant can grow. Rice grows best in tropical regions. It needs a lot of water and high temperatures. It grows on heavy, muddy soils that can hold water. In many cases farmers grow rice in paddies. These are fields that have dirt walls around them to keep the water inside. The fields are flooded with water, and seeds or small rice plants are placed into the muddy soil. In Southeast Asia and other developing countries farmers do most of the work by hand. They use oxen or water buffaloes to pull the ploughs. In industrialized countries work is done mostly by machines. Two or three weeks before the harvest begins, water is pumped out of the fields. The rice is cut and the kernels are separated from the rest of the plant. The wet kernels are laid on mats to dry in the sun. Sometimes brown rice, in which the bran layers remain, is produced. Then it is packaged and sold. Rice gives your body energy in the form of carbohydrates. It also has vitamin B and other minerals in it. Rice has little fat and is easy to digest. Rice is in many other foods as well. It is in breakfast cereals, frozen and baby foods and soup. Breweries use rice to make beer. In Japan, rice kernels are used to make an alcoholic drink. Most rice is grown in lowland regions, but about one fifth of the world’s rice is upland rice, which grows on terraces in the mountains. The world’s farmers grow more than 700 million tons a year. 90% of the rice production comes from Asia. China and India are the world’s biggest producers. In these countries rice is planted in the big river plains of the Ganges and Yangtze. Almost all of Asia’s rice is eaten by the population there. Sometimes they don’t even have enough to feed their own people. Other countries, like the USA, produce rice for export.


In the past thirty years, Americans’ consumption of restaurant and take-out food has doubled. The result, according to many health watchdog groups, is an increase in overweight and obesity. Almost 60 million Americans are obese, costing $117 billion each year in health care and related costs. Members of Congress have decided they need to do something about the obesity epidemic. A bill was recently introduced in the House that would require restaurants with twenty or more locations to list the nutritional content of their food on their menus. A Senate version of the bill is expected in the near future. Our legislators point to the trend of restaurants’ marketing larger meals at attractive prices. People order these meals believing that they are getting a great value, but what they are also getting could be, in one meal, more than the daily recommended allowances of calories, fat, and sodium. The question is, would people stop “supersizing,” or make other healthier choices, if they knew the nutritional content of the food they’re ordering? Lawmakers think they would, and the gravity of the obesity problem has caused them to act to change menus. The Menu Education and Labeling, or MEAL, Act would result in menus that look like the nutrition facts panels found on food in supermarkets. Those panels are required by the 1990 Nutrition Labeling and Education Act, which exempted restaurants. The new restaurant menus would list calories, fat, and sodium on printed menus, and calories on menu boards, for all items that are offered on a regular basis (daily specials don’t apply). But isn’t this simply asking restaurants to state the obvious? Who isn’t aware that an order of supersize fries isn’t health food? Does anyone order a double cheeseburger thinking they’re being virtuous? Studies have shown that it’s not that simple. In one, registered dieticians couldn’t come up with accurate estimates of the calories found in certain fast foods. Who would have guessed that a milk shake, which sounds pretty healthy (it does contain milk, after all), has more calories than three McDonald’s cheeseburgers? Or that one chain’s chicken breast sandwich, another better-sounding alternative to a burger, contains more than half a day’s calories and twice the recommended daily amount of sodium? Even a fast-food coffee drink, without a doughnut to go with it, has almost half the calories needed in a day. The restaurant industry isn’t happy about the new bill. Arguments against it include the fact that diet alone is not the reason for America’s obesity epidemic. A lack of adequate exercise is also to blame. In addition, many fast food chains already post nutritional information on their websites, or on posters located in their restaurants. Those who favor the MEAL Act, and similar legislation, say in response that we must do all we can to help people maintain a healthy weight. While the importance of exercise is undeniable, the quantity and quality of what we eat must be changed. They believe that if we want consumers to make better choices when they eat out, nutritional information must be provided where they are selecting their food. Restaurant patrons are not likely to have memorized the calorie counts they may have looked up on the Internet, nor are they going to leave their tables, or a line, to check out a poster that might be on the opposite side of the restaurant.



In 1904, the U.S. Patent Office granted a patent for a board game called “The Landlord’s Game,” which was invented by a Virginia Quaker named Lizzie Magie. Magie was a follower of Henry George, who started a tax movement that supported the theory that the renting of land and real estate produced an unearned increase in land values that profited a few individuals (landlords) rather than the majority of the people (tenants). George proposed a single federal tax based on land ownership; he believed this tax would weaken the ability to form monopolies, encourage equal opportunity, and narrow the gap between rich and poor. Lizzie Magie wanted to spread the word about George’s proposal, making it more understandable to a majority of people who were basically unfamiliar with economics. As a result, she invented a board game that would serve as a teaching device. The Landlord’s Game was intended to explain the evils of monopolies, showing that they repressed the possibility for equal opportunity. Her instructions read in part: “The object of this game is not only to afford amusement to players, but to illustrate to them how, under the present or prevailing system of land tenure, the landlord has an advantage over other enterprisers, and also how the single tax would discourage speculation.” The board for the game was painted with forty spaces around its perimeter, including four railroads, two utilities, twenty-two rental properties, and a jail. There were other squares directing players to go to jail, pay a luxury tax, and park. All properties were available for rent, rather than purchase. Magie’s invention became very popular, spreading through word of mouth, and altering slightly as it did. Since it was not manufactured by Magie, the boards and game pieces were homemade. Rules were explained and transmuted from one group of friends to another. There is evidence to suggest that The Landlord’s Game was played at Princeton, Harvard, and the University of Pennsylvania. In 1924, Magie approached George Parker (President of Parker Brothers) to see if he was interested in purchasing the rights to her game. Parker turned her down, saying that it was too political. The game increased in popularity, migrating north to New York state, west to Michigan, and as far south as Texas. By the early 1930s, it reached Charles Darrow in Philadelphia. In 1935, claiming to be the inventor, Darrow got a patent for the game, and approached Parker Brothers. This time, the company loved it, swallowed Darrow’s prevarication, and not only purchased his patent, but paid him royalties for every game sold. The game quickly became Parker Brothers’ bestseller, and made the company, and Darrow, millions of dollars. When Parker Brothers found out that Darrow was not the true inventor of the game, they wanted to protect their rights to the successful game, so they went back to Lizzie Magie, now Mrs. Elizabeth Magie Phillips of Clarendon, Virginia. She agreed to a payment of $500 for her patent, with no royalties. To stay true to the original intent of her game’s invention, she required in return that Parker Brothers manufacture and market The Landlord’s Game in addition to Monopoly. However, only a few hundred games were ever produced. Monopoly went on to become the world’s bestselling board game, with an objective that is the exact opposite of the one Magie intended: “The idea of the game is to buy and rent or sell property so profitably that one becomes the wealthiest player and eventually monopolist. 
The game is one of shrewd and amusing trading and excitement.”