The fossil remains of the first flying vertebrates, the pterosaurs, have intrigued paleontologists for more than two centuries. How such large creatures, which weighed in some cases as much as a piloted hang-glider and had wingspans from 8 to 12 meters, solved the problems of powered flight, and exactly what these creatures were--reptiles or birds--are among the questions scientists have puzzled over. Perhaps the least controversial assertion about the pterosaurs is that they were reptiles. Their skulls, pelvises, and hind feet are reptilian. The anatomy of their wings suggests that they did not evolve into the class of birds. In pterosaurs a greatly elongated fourth finger of each forelimb supported a winglike membrane. The other fingers were short and reptilian, with sharp claws. In birds the second finger is the principal strut of the wing, which consists primarily of feathers. If the pterosaurs walked on all fours, the three short fingers may have been employed for grasping. When a pterosaur walked or remained stationary, the fourth finger, and with it the wing, could only turn upward in an extended inverted V-shape along each side of the animal’s body. The pterosaurs resembled both birds and bats in their overall structure and proportions. This is not surprising because the design of any flying vertebrate is subject to aerodynamic constraints. Both the pterosaurs and the birds have hollow bones, a feature that represents a savings in weight. In the birds, however, these bones are reinforced more massively by internal struts. Although scales typically cover reptiles, the pterosaurs probably had hairy coats. T.H. Huxley reasoned that flying vertebrates must have been warm-blooded because flying implies a high rate of metabolism, which in turn implies a high internal temperature. Huxley speculated that a coat of hair would insulate against loss of body heat and might streamline the body to reduce drag in flight.
The recent discovery of a pterosaur specimen covered in long, dense, and relatively thick hairlike fossil material was the first clear evidence that his reasoning was correct. Efforts to explain how the pterosaurs became airborne have led to suggestions that they launched themselves by jumping from cliffs, by dropping from trees, or even by rising into light winds from the crests of waves. Each hypothesis has its difficulties. The first wrongly assumes that the pterosaurs’ hind feet resembled a bat’s and could serve as hooks by which the animal could hang in preparation for flight. The second hypothesis seems unlikely because large pterosaurs could not have landed in trees without damaging their wings. The third calls for high waves to channel updrafts. The wind that made such waves, however, might have been too strong for the pterosaurs to control their flight once airborne.
Literature is at once the most intimate and the most articulate of the arts. It cannot impart its effect through the senses or the nerves as the other arts can; it is beautiful only through the intelligence; it is the mind speaking to the mind; until it has been put into absolute terms, of an invariable significance, it does not exist at all. It cannot awaken this emotion in one, and that in another; if it fails to express precisely the meaning of the author, if it does not say ~him~, it says nothing, and is nothing. So that when a poet has put his heart, much or little, into a poem, and sold it to a magazine, the scandal is greater than when a painter has sold a picture to a patron, or a sculptor has modelled a statue to order. These are artists less articulate and less intimate than the poet; they are more exterior to their work; they are less personally in it; they part with less of themselves in the dicker. It does not change the nature of the case to say that Tennyson and Longfellow and Emerson sold the poems in which they couched the most mystical messages their genius was charged to bear mankind. They submitted to the conditions which none can escape; but that does not justify the conditions, which are none the less the conditions of hucksters because they are imposed upon poets. If it will serve to make my meaning a little clearer, we will suppose that a poet has been crossed in love, or has suffered some real sorrow, like the loss of a wife or child. He pours out his broken heart in verse that shall bring tears of sacred sympathy from his readers, and an editor pays him a hundred dollars for the right of bringing his verse to their notice. It is perfectly true that the poem was not written for these dollars, but it is perfectly true that it was sold for them. The poet must use his emotions to pay his provision bills; he has no other means; society does not propose to pay his bills for him. 
Yet, and at the end of the ends, the unsophisticated witness finds the transaction ridiculous, finds it repulsive, finds it shabby. Somehow he knows that if our huckstering civilization did not at every moment violate the eternal fitness of things, the poet's song would have been given to the world, and the poet would have been cared for by the whole human brotherhood, as any man should be who does the duty that every man owes it. The instinctive sense of the dishonor which money-purchase does to art is so strong that sometimes a man of letters who can pay his way otherwise refuses pay for his work, as Lord Byron did, for a while, from a noble pride, and as Count Tolstoy has tried to do, from a noble conscience. But Byron's publisher profited by a generosity which did not reach his readers; and the Countess Tolstoy collects the copyright which her husband foregoes; so that these two eminent instances of protest against business in literature may be said not to have shaken its money basis. I know of no others; but there may be many that I am culpably ignorant of. Still, I doubt if there are enough to affect the fact that Literature is Business as well as Art, and almost as soon. At present business is the only human solidarity; we are all bound together with that chain, whatever interests and tastes and principles separate us.
Some observers have attributed the dramatic growth in temporary employment that occurred in the United States during the 1980’s to increased participation in the workforce by certain groups, such as first-time or reentering workers, who supposedly prefer such arrangements. However, statistical analyses reveal that demographic changes in the workforce did not correlate with variations in the total number of temporary workers. Instead, these analyses suggest that factors affecting employers account for the rise in temporary employment. One factor is product demand: temporary employment is favored by employers who are adapting to fluctuating demand for products while at the same time seeking to reduce overall labor costs. Another factor is labor’s reduced bargaining strength, which allows employers more control over the terms of employment. Given the analyses, which reveal that growth in temporary employment now far exceeds the level explainable by recent workforce entry rates of groups said to prefer temporary jobs, firms should be discouraged from creating excessive numbers of temporary positions. Government policymakers should consider mandating benefit coverage for temporary employees, promoting pay equity between temporary and permanent workers, assisting labor unions in organizing temporary workers, and encouraging firms to assign temporary jobs primarily to employees who explicitly indicate that preference.
No very satisfactory account of the mechanism that caused the formation of the ocean basins has yet been given. The traditional view supposes that the upper mantle of the earth behaves as a liquid when it is subjected to small forces for long periods and that differences in temperature under oceans and continents are sufficient to produce convection in the mantle of the earth with rising convection currents under the midocean ridges and sinking currents under the continents. Theoretically, this convection would carry the continental plates along as though they were on a conveyor belt and would provide the forces needed to produce the split that occurs along the ridge. This view may be correct: it has the advantage that the currents are driven by temperature differences that themselves depend on the position of the continents. Such a back-coupling, in which the position of the moving plate has an impact on the forces that move it, could produce complicated and varying motions. On the other hand, the theory is implausible because convection does not normally occur along lines, and it certainly does not occur along lines broken by frequent offsets or changes in direction, as the ridge is. Also, it is difficult to see how the theory applies to the plate between the Mid-Atlantic Ridge and the ridge in the Indian Ocean. This plate is growing on both sides, and since there is no intermediate trench, the two ridges must be moving apart. It would be odd if the rising convection currents kept exact pace with them. An alternative theory is that the sinking part of the plate, which is denser than the hotter surrounding mantle, pulls the rest of the plate after it. Again it is difficult to see how this applies to the ridge in the South Atlantic, where neither the African nor the American plate has a sinking part. Another possibility is that the sinking plate cools the neighboring mantle and produces convection currents that move the plates.
This last theory is attractive because it gives some hope of explaining the enclosed seas, such as the Sea of Japan. These seas have a typical oceanic floor, except that the floor is overlaid by several kilometers of sediment. Their floors have probably been sinking for long periods. It seems possible that a sinking current of cooled mantle material on the upper side of the plate might be the cause of such deep basins. The enclosed seas are an important feature of the earth’s surface, and seriously require explanation because, in addition to the enclosed seas that are developing at present behind island arcs, there are a number of older ones of possibly similar origin, such as the Gulf of Mexico, the Black Sea, and perhaps the North Sea.
Japanese firms have achieved the highest levels of manufacturing efficiency in the world automobile industry. Some observers of Japan have assumed that Japanese firms use the same manufacturing equipment and techniques as United States firms but have benefited from the unique characteristics of Japanese employees and the Japanese culture. However, if this were true, then one would expect Japanese auto plants in the United States to perform no better than factories run by United States companies. This is not the case. Japanese-run automobile plants located in the United States and staffed by local workers have demonstrated higher levels of productivity when compared with factories owned by United States companies. Other observers link high Japanese productivity to higher levels of capital investment per worker. But a historical perspective leads to a different conclusion. When the two top Japanese automobile makers matched and then doubled United States productivity levels in the mid-sixties, capital investment per employee was comparable to that of United States firms. Furthermore, by the late seventies, the amount of fixed assets required to produce one vehicle was roughly equivalent in Japan and in the United States. Since capital investment was not higher in Japan, other factors must have led to higher productivity. A more fruitful explanation may lie with Japanese production techniques. Japanese automobile producers did not simply implement conventional processes more effectively: they made critical changes in United States procedures. For instance, the mass-production philosophy of United States automakers encouraged the production of huge lots of cars in order to utilize fully expensive, component-specific equipment and to occupy fully workers who had been trained to execute one operation efficiently.
Japanese automakers chose to make small-lot production feasible by introducing several departures from United States practices, including the use of flexible equipment that could be altered easily to do several different production tasks and the training of workers in multiple jobs. Automakers could schedule the production of different components or models on single machines, thereby eliminating the need to store the buffer stocks of extra components that result when specialized equipment and workers are kept constantly active.
Milankovitch proposed in the early twentieth century that the ice ages were caused by variations in the Earth’s orbit around the Sun. For some time this theory was considered untestable, largely because there was no sufficiently precise chronology of the ice ages with which the orbital variations could be matched. To establish such a chronology it is necessary to determine the relative amounts of land ice that existed at various times in the Earth’s past. A recent discovery makes such a determination possible: relative land-ice volume for a given period can be deduced from the ratio of two oxygen isotopes, 16 and 18, found in ocean sediments. Almost all the oxygen in water is oxygen 16, but a few molecules out of every thousand incorporate the heavier isotope 18. When an ice age begins, the continental ice sheets grow, steadily reducing the amount of water evaporated from the ocean that will eventually return to it. Because heavier isotopes tend to be left behind when water evaporates from the ocean surfaces, the remaining ocean water becomes progressively enriched in oxygen 18. The degree of enrichment can be determined by analyzing ocean sediments of the period, because these sediments are composed of calcium carbonate shells of marine organisms, shells that were constructed with oxygen atoms drawn from the surrounding ocean. The higher the ratio of oxygen 18 to oxygen 16 in a sedimentary specimen, the more land ice there was when the sediment was laid down. As an indicator of shifts in the Earth’s climate, the isotope record has two advantages. First, it is a global record: there is remarkably little variation in isotope ratios in sedimentary specimens taken from different continental locations. Second, it is a more continuous record than that taken from rocks on land. Because of these advantages, sedimentary evidence can be dated with sufficient accuracy by radiometric methods to establish a precise chronology of the ice ages.
The dated isotope record shows that the fluctuations in global ice volume over the past several hundred thousand years have a pattern: an ice age occurs roughly once every 100,000 years. These data have established a strong connection between variations in the Earth’s orbit and the periodicity of the ice ages. However, it is important to note that other factors, such as volcanic particulates or variations in the amount of sunlight received by the Earth, could potentially have affected the climate. The advantage of the Milankovitch theory is that it is testable: changes in the Earth’s orbit can be calculated and dated by applying Newton’s laws of gravity to progressively earlier configurations of the bodies in the solar system. Yet the lack of information about other possible factors affecting global climate does not make them unimportant.
A polytheist always has favorites among the gods, determined by his own temperament, age, and condition, as well as his own interest, temporary or permanent. If it is true that everybody loves a lover, then Venus will be a popular deity with all. But from lovers she will elicit special devotion. In ancient Rome, when a young couple went out together to see a procession or other show, they would of course pay great respect to Venus, when her image appeared on the screen. Instead of saying, "Isn't love wonderful?" they would say, "Great art thou, O Venus." In a polytheistic society you could tell a good deal about a person's frame of mind by the gods he favored, so that to tell a girl you were trying to woo that you thought Venus overrated was hardly the way to win her heart. But in any case, a lovesick youth or maiden would be spontaneously supplicating Venus. The Greeks liked to present their deities in human form; it was natural to them to symbolize the gods as human beings glorified, idealized. But this fact is also capable of misleading us. We might suppose that the ancients were really worshipping only themselves; that they were, like Narcissus, beholding their own image in a pool, so that their worship was ~anthropocentric~ (man-centered) rather than ~theocentric~ (god-centered). We are in danger of assuming that they were simply constructing the god in their own image. This is not necessarily so. The gods must always be symbolized in one form or another. To give them a human form is one way of doing this, technically called ~anthropomorphism~ (from the Greek ~anthropos~, a man, and ~morphé~, form). People of certain temperaments and within certain types of culture seem to be more inclined to it than are others. It is, however, more noticeable in others than in oneself, and those who affect to despise it are sometimes conspicuous for their addiction to it. A German once said an Englishman's idea of God is an Englishman twelve feet tall.
Such disparagement of anthropomorphism occurred in the ancient world, too. The Celts, for instance, despised Greek practice in this matter, preferring to use animals and other such symbols. The Egyptians favored more abstract and stylized symbols, among which a well-known example is the solar disk, a symbol of Rà, the sun-god. Professor C. S. Lewis tells of an Oxford undergraduate he knew who, priggishly despising the conventional images of God, thought he was overcoming anthropomorphism by thinking of the Deity as infinite vapor or smoke. Of course even the bearded-old-man image can be a better symbol of Deity than ever could be the image, even if this were psychologically possible, of an unlimited smog. What is really characteristic of all polytheism, however, is not the worship of idols or humanity or forests or stars; it is, rather, the worship of innumerable ~powers~ that confront and affect us. The powers are held to be valuable in themselves; that is why they are to be worshipped. But the values conflict. The gods do not cooperate, so you have to play them off against each other. Suppose you want rain. You know of two gods, the dry-god who sends drought and the wet-god who sends rain. You do not suppose that you can just pray to the wet-god to get busy, and simply ignore the dry-god. If you do so, the latter may be offended, so that no matter how hard the wet-god tries to oblige you, the dry-god will do his best to wither everything. Because both gods are powerful you must take both into consideration, begging the wet-god to be generous and beseeching the dry-god to stay his hand.
A newly issued report reveals in facts and figures what should have been known in principle, that quite a lot of business companies are going to go under during the coming decade, as tariff walls are progressively dismantled. Labor and capital valued at $12 billion are to be made idle through the impact of duty-free imports. As a result, 35,000 workers will be displaced. Some will move to other jobs and other departments within the same firm. Around 15,000 will have to leave the firm now employing them and work elsewhere. The report is measuring exclusively the influence of free trade with Europe. The authors do not take into account the expected expansion of production over the coming years. On the other hand, they are not sure that even the export predictions they make will be achieved. For this presupposes a suitable business climate that allows the pressure to increase productivity to materialize. There are two reasons why this scenario may not happen. The first is that industry on the whole is not taking the initiatives necessary to adapt fully to the new price situation it will be facing as time goes by. This is another way of saying that the manufacturers do not realize what lies ahead. The government is to blame for not making the position absolutely clear. It should be saying that in ten years' time tariffs on all industrial goods imported from Europe will be eliminated. There will be no adjustment assistance for manufacturers who cannot adapt to this situation. The second obstacle to adjustment is not stressed in the same way in the report; it is the attitude of the service sector. Not only are service industries unaware that the Common Market treaty concerns them too, but they are also artificially insulated from the physical pressures of international competition. The manufacturing sector has been forced to apply its nose to the grindstone for some time now, by the increasingly stringent import-liberalization program.
The ancillary services on which the factories depend show a growing indifference to their work obligations. They seem unaware that overmanned ships, underutilized container equipment in the ports, and repeated work stoppages slow the country's attempts to narrow the trade gap. The remedy is to cut the fees charged by these services so as to reduce their earnings in exactly the same way that earnings in industrial undertakings are reduced by the tariff reduction program embodied in the treaty with the European Community. There is no point in dismissing 15,000 industrial workers from their present jobs during the coming ten years if all the gain in productivity is wasted by costly harbor, transport, financial, administrative and other services. The free trade treaty is their concern as well. Surplus staff should be removed, if need be, from all workplaces, not just from the factories. Efficiency is everybody's business.
The fundamental objectives of sociology are the same as those of science generally: discovery and explanation. To ~discover~ the essential data of social behavior and the connections among the data is the first objective of sociology. To ~explain~ the data and the connections is the second and larger objective. Science makes its advances in terms of both of these objectives. Sometimes it is the discovery of a new element or set of elements that marks a major breakthrough in the history of a scientific discipline. Closely related to such discovery is the discovery of relationships of data that had never been noted before. All of this is, as we know, of immense importance in science. But the drama of discovery, in this sense, can sometimes lead us to overlook the greater importance of explanation of what is revealed by the data. Sometimes decades, even centuries, pass before known connections and relationships are actually explained. Discovery and explanation are the two great interpenetrating, interacting realms of science. The order of reality that interests the scientists is the ~empirical~ order, that is, the order of data and phenomena revealed to us through observation or experience. To be precise or explicit about what is, and is not, revealed by observation is not always easy, to be sure. And often it is necessary for our natural powers of observation to be supplemented by the most intricate of mechanical aids for a given object to become "empirical" in the sense just used. That the electron is not as immediately visible as is the mountain range does not mean, obviously, that it is any less empirical. That social behavior does not lend itself to as quick and accurate description as, say, chemical behavior of gases and compounds does not mean that social roles, statuses, and attitudes are any less empirical than molecules and tissues. What is empirical and observable today may have been nonexistent in scientific consciousness a decade ago.
Moreover, the empirical is often data ~inferred~ from direct observation. All of this is clear enough, and we should make no pretense that there are not often shadow areas between the empirical and the nonempirical. Nevertheless, the first point to make about any science, physical or social, is that its world of data is the empirical world. A very large amount of scientific energy goes merely into the work of expanding the frontiers, through discovery, of the known, observable, empirical world. From observation or discovery we move to ~explanation~. The explanation sought by the scientist is, of course, not at all like the explanation sought by the theologian or metaphysician. The scientist is not interested--not, that is, in his role of scientist--in ultimate, transcendental, or divine causes of what he sets himself to explain. He is interested in explanations that are as empirical as the data themselves. If it is the high incidence of crime in a certain part of a large city that requires explanation, the scientist is obliged to offer his explanation in terms of factors which are as empirically real as the phenomenon of crime itself. He does not explain the problem, for example, in terms of references to the will of God, demons, or original sin. A satisfactory explanation is not only one that is empirical, however, but one that can be stated in the terms of a ~causal proposition~. Description is an indispensable point of beginning, but description is not explanation. It is well to stress this point, for there are all too many scientists, or would-be scientists, who are primarily concerned with data gathering, data counting, and data describing, and who seem to forget that such operations, however useful, are but the first step. Until we have accounted for the problem at hand, explained it causally by referring the data to some principle or generalization already established, or to some new principle or generalization, we have not explained anything.
Much as an electrical lamp transforms electrical energy into heat and light, the visual "apparatus" of a human being acts as a transformer of light into sight. Light projected from a source or reflected by an object enters the cornea and lens of the eyeball. The energy is transmitted to the retina of the eye, whose rods and cones are activated. The stimuli are transferred by nerve cells to the optic nerve and then to the brain. Man is a binocular animal, and the impressions from his two eyes are translated into sight--a rapid, compound analysis of the shape, form, color, size, position, and motion of the things he sees. Photometry is the science of measuring light. The illuminating engineer and designer employ photometric data constantly in their work. In all fields of application of light and lighting, they predicate their choice of equipment, lamps, wall finishes, colors of light and backgrounds, and other factors affecting the luminous and environmental pattern to be secured, in great part on data supplied originally by a photometric laboratory. Today, extensive tables and charts of photometric data are used widely, constituting the basis for many details of design. Although the lighting designer may not be called upon to do the detailed work of making measurements or plotting data in the form of photometric curves and analyzing them, an understanding of the terms used and their derivation forms valuable background knowledge. The perception of color is a complex visual sensation, intimately related to light. The apparent color of an object depends primarily upon four factors: its ability to reflect various colors of light, the nature of the light by which it is seen, the color of its surroundings, and the characteristics and state of adaptation of the eye. In most discussions of color, a distinction is made between white and colored objects.
White is the color name most usually applied to a material that diffusely reflects a high percentage of all the hues of light. Colors that have no hue are termed neutral or achromatic colors. They include white, off-white, all shades of gray, down to black. All colored objects selectively absorb certain wave-lengths of light and reflect or transmit others in varying degrees. Inorganic materials, chiefly metals such as copper and brass, reflect light from their ~surfaces~. Hence we have the term "surface" or "metallic" colors, as contrasted with "body" or "pigment" colors. In the former, the light reflected from the surface is often tinted. Most paints, on the other hand, have body or pigment colors. In these, light is reflected from the surface without much color change, but the body material absorbs some colors and reflects others; hence, the diffuse reflection from the body of the material is colored but often appears to be overlaid and diluted with a "white" reflection from the glossy surface of the paint film. In paints and enamels, the pigment particles, which are usually opaque, are suspended in a vehicle such as oil or plastic. The particles of a dye, on the other hand, are considerably finer and may be described as coloring matter in solution. The dye particles are more often transparent or translucent.
In ~Scholasticism and Politics~, written during World War II, Maritain expressed discouragement at the pessimism and lack of self-confidence characteristic of the Western democracies, and in the postwar world he joined enthusiastically in the resurgence of that confidence. While stopping short of asserting that democracy as a political system flowed directly from correct philosophical principles, he nonetheless dismissed Fascism and Communism as inherently irrational. Bourgeois individualism was, however, implicitly immoral and, by breaking down all sense of community and shared moral values, would inevitably end in some form of statism: order imposed from above. In ~Integral Humanism~ (1936) and later works, he developed a systematic critique of the prevailing modern political ideologies and argued that a workable political order, which might appropriately be democracy, depended on a correct understanding of human nature and of natural moral law. Maritain became something of an Americanophile, seeking to counter not only what he regarded as European misconceptions about America but also the Americans' own self-deprecation. In ~Reflections on America~ (1958), he argued that Americans were not really materialistic but were the most idealistic people in the world, although theirs was an idealism often unformed and lacking in philosophical bases. America, he thought, offered perhaps the best contemporary prospect for the emergence of a truly Christian civilization, based not on governmental decree but on the gradual realization of Christian values on the part of a majority of the population. American saints were coming, he predicted. But his postulation of a possible Christian civilization in America did not in any way temper his optimistic political liberalism--a facet of his thought which caused him to be held in suspicion by some of his fellow Catholics in the 1950s.
The Dominican chaplain at Princeton, for example, refused to allow him to address the Catholic students. (One of the exquisite ironies of recent Catholic history was that Maritain in his last books was acerbically critical of secularizing priests, while the Dominican chaplain resigned from the priesthood and ended his days as a real estate salesman in Florida.) No doubt in part because of Raïssa's background, Maritain had an enduring interest in anti-Semitism, which he analyzed and criticized in two books, and he was one of the principal influences in the effort to establish better Jewish-Catholic relations. Racism he regarded as America's most severe flaw. As early as 1958 he was praising Martin Luther King, Jr., and the Chicago neighborhood organizer Saul Alinsky. Maritain and, to a lesser extent, Gilson provided the program for a bold kind of Catholic intellectuality--an appropriation of medieval thought for modern use, not so much a medieval revival as a demonstration of the perennial relevance of the medieval philosophical achievement. The modern mind was to be brought back to its Catholic roots, not by the simple disparagement of modernity or by emphasis on the subjective necessity of faith, but by a rigorous and demanding appeal to reason. In the process, Scholastic principles would be applied in new and often daring ways. In the end the gamble failed. Despite promising signs in the 1940s, secular thinkers did not finally find the Scholastic appeal persuasive. And, as is inevitable when an intellectual community is dominated so thoroughly by a single system of thought, a restiveness was building up in Catholic circles.
Although Maritain insisted that Thomism, because of the central importance it gave to the act of existence, was the true existentialism, Catholic intellectuals of the 1950s were attracted to the movement which more usually went by that name; and Gabriel Marcel, a Catholic existentialist of the same generation as Gilson and Maritain, was available to mediate between faith and anguish. Catholic colleges in America were hospitable to existentialist and phenomenological currents at a time when few secular institutions were, and what Catholics sought there was primarily a philosophy which was serious about the metaphysical questions of existence, yet not as rationalistic, rigid, and abstract as Scholasticism often seemed to be.
The economic condition of the low-income regions of the world is one of the great problems of our time. Their progress is important to the high-income countries, not only for humanitarian and political reasons but also because rapid economic growth in the low-income countries could make a substantial contribution to the expansion and prosperity of the world economy as a whole. The governments of most high-income countries have in recent years undertaken important aid programs, both bilaterally and multilaterally, and have thus demonstrated their interest in the development of low-income countries. They have also worked within the General Agreement on Tariffs and Trade (GATT) for greater freedom of trade and, recognizing the special problems of low-income countries, have made special trading arrangements to meet their needs. But a faster expansion of trade with high-income countries is necessary if the low-income countries are to enjoy a satisfactory rate of growth. This statement is therefore concerned with the policies of high-income countries toward their trade with low-income countries. Our recommendations are based on the conviction that a better distribution of world resources and a more rational utilization of labor are in the general interest. A liberal policy on the part of high-income countries with respect to their trade with low-income countries will not only be helpful to the low-income countries but, when transitional adjustments have taken place, beneficial to the high-income countries as well. It is necessary to recognize, however, that in furthering the development of low-income countries, the high-income countries can play only a supporting role. If development is to be successful, the main effort must necessarily be made by the people of the low-income countries.
The high-income countries are, moreover, likely to provide aid and facilitate trade more readily and extensively where the low-income countries are seen to be making sound and determined efforts to help themselves, and thus to be making effective use of their aid and trade opportunities. It is, then, necessary that the low-income countries take full account of the lessons that have been learned from the experience of recent years, if they wish to achieve successful development and benefit from support from high-income countries. Among the most important of these lessons are the following: Severe damage has been done by inflation. A sound financial framework evokes higher domestic savings and investment as well as more aid and investment from abroad. Budgetary and monetary discipline and a more efficient financial and fiscal system help greatly to mobilize funds for investment and thereby decisively influence the rate of growth. Foreign aid should also be efficiently applied to this end. The energies of the people of low-income countries are more likely to be harnessed to the task of economic development where the policies of their governments aim to offer economic opportunity for all and to reduce excessive social inequalities. Development plans have tended to concentrate on industrial investment. The growth of industry depends, however, on concomitant development in agriculture. A steady rise in productivity on the farms, where in almost all low-income countries a majority of the labor force works, is an essential condition of rapid over-all growth. Satisfactory development of agriculture is also necessary to provide an adequate market for an expanding industrial sector and to feed the growing urban population without burdening the balance of payments with heavy food imports. Diminishing surpluses in the high-income countries underline the need for a faster growth of agricultural productivity in low-income countries. 
Success in this should, moreover, lead to greater trade in agricultural products among the low-income countries themselves as well as to increased exports of some agricultural products to the high-income countries. There can be no doubt about the urgency of the world food problem. Adequate nourishment and a balanced diet are not only necessary for working adults but are crucial for the mental and physical development of growing children. Yet, in a number of low-income countries where the diet is already insufficient, the production of food has fallen behind the increase in population. A continuation of this trend must lead to endemic famine. The situation demands strenuous efforts in the low-income countries to improve the production, preservation, and distribution of food so that these countries are better able to feed themselves.
It is indisputable that in order to fulfill its many functions, water should be clean and biologically valuable. The costs connected with the provision of biologically valuable water for food production, and with the maintenance of sufficiently clean water, are therefore primarily production costs. Purely "environmental" costs seem to be in this respect only costs connected with the safeguarding of cultural, recreational and sports functions which the water courses and reservoirs fulfill both in nature and in human settlements. The pollution problems of the atmosphere resemble those of the water only partly. So far, the supply of air has not been deficient as was the case with water, and the dimensions of the air-shed are so vast that a number of people still hold the opinion that air need not be economized. However, scientific forecasts have shown that the time may be already approaching when clean and biologically valuable air will become problem No. 1. Air being ubiquitous, people are particularly sensitive about any reduction in the quality of the atmosphere, the increased contents of dust and gaseous exhalations, and particularly about the presence of odors. The demand for purity of atmosphere, therefore, emanates much more from the population itself than from the specific sectors of the national economy affected by a polluted or even biologically aggressive atmosphere. The households' share in atmospheric pollution is far bigger than that of industry, which, in turn, further complicates the economic problems of atmospheric purity. Some countries have already collected positive experience with the reconstruction of whole urban sectors on the basis of new heating appliances not based on the combustion of solid fossil fuels; estimates of the economic consequences of such measures have also been put forward.
In contrast to water, where the maintenance of purity would seem primarily to be related to the costs of production and transport, a far higher proportion of the costs of maintaining the purity of the atmosphere derives from environmental considerations. Industrial sources of gaseous and dust emissions are well known and classified; their location can be accurately identified, which makes them controllable. With the exception, perhaps, of the elimination of sulphur dioxide, technical means and technological processes exist which can be used for the elimination of all excessive impurities of the air from the various emissions. Atmospheric pollution caused by the private property of individuals (their dwellings, automobiles, etc.) is difficult to control. Some sources such as motor vehicles are very mobile, and they are thus capable of polluting vast territories. In this particular case, the cost of anti-pollution measures will have to be borne, to a considerable extent, by individuals, whether in the form of direct costs or indirectly in the form of taxes, dues, surcharges, etc. The problem of noise is a typical example of an environmental problem which cannot be solved only passively, i.e., merely by protective measures, but will require the adoption of active measures, i.e., direct interventions at the source. The costs of a complete protection against noise are so prohibitive as to make it unthinkable even in the economically most developed countries. At the same time it would not seem feasible, either economically or politically, to force the population to carry the costs of individual protection against noise, for example, by reinforcing the sound insulation of their homes. A solution of this problem probably cannot be found in the near future.
With Friedrich Engels, Karl Marx in 1848 published the ~Communist Manifesto~, calling upon the masses to rise and throw off their economic chains. His maturer theories of society were later elaborated in his large and abstruse work ~Das Kapital~. Starting as a non-violent revolutionist, he ended life as a major social theorist more or less sympathetic with violent revolution, if such became necessary in order to change the social system which he believed to be frankly predatory upon the masses. On the theoretical side, Marx set up the doctrine of surplus value as the chief element in capitalistic exploitation. According to this theory, the ruling classes no longer employed military force primarily as a means to plundering the people. Instead, they used their control over employment and working conditions under the bourgeois capitalistic system for this purpose, paying only a bare subsistence wage to the worker while they appropriated all surplus values in the productive process. He further taught that the strategic disadvantage of the worker in industry prevented him from obtaining a fairer share of the earnings by bargaining methods and drove him to revolutionary procedures as a means to establishing his economic and social rights. This revolution might be peacefully consummated by parliamentary procedures if the people prepared themselves for political action by mastering the materialistic interpretation of history and by organizing politically for the final event. It was his belief that the aggressions of the capitalist class would eventually destroy the middle class and take over all their sources of income by a process of capitalistic absorption of industry, a process which has failed to occur in most countries. With minor exceptions, Marx's social philosophy is now generally accepted by left-wing labor movements in many countries, but rejected by centrist labor groups, especially those in the United States.
In Russia and other Eastern European countries, however, Socialist leaders adopted the methods of violent revolution because of the opposition of the ruling classes. Yet, many now hold that the present Communist regime in Russia and her satellite countries is no longer a proletarian movement based on Marxist social and political theory, but a camouflaged imperialistic effort to dominate the world in the interest of a new ruling class. It is important, however, that those who wish to approach Marx as a teacher should not be "buffaloed" by his philosophic approach. They are very likely to be in these days, because those most interested in propagating the ideas of Marx, the Russian Bolsheviks, have swallowed down his Hegelian philosophy along with his science of revolutionary engineering, and they look upon us irreverent peoples who presume to meditate on social and even revolutionary problems without making our obeisance to the mysteries of Dialectic Materialism, as a species of unredeemed and well-nigh unredeemable barbarians. They are right in scorning our ignorance of the scientific ideas of Karl Marx and our indifference to them. They are wrong in scorning our distaste for having practical programs presented in the form of systems of philosophy. In that we simply represent a more progressive intellectual culture than that in which Marx received his education, a culture farther emerged from the dominance of religious attitudes.
The first and decisive step in the expansion of Europe overseas was the conquest of the Atlantic Ocean. That the nation to achieve this should be Portugal was the logical outcome of her geographical position and her history. Placed on the extreme margin of the old, classical Mediterranean world and facing the untraversed ocean, Portugal could adapt and develop the knowledge and experience of the past to meet the challenge of the unknown. Some centuries of navigating the coastal waters of Western Europe and Northern Africa had prepared Portuguese seamen to appreciate the problems which the Ocean presented and to apply and develop the methods necessary to overcome them. From the seamen of the Mediterranean, particularly those of Genoa and Venice, they had learned the organization and conduct of a mercantile marine, and from Jewish astronomers and Catalan mapmakers the rudiments of navigation. Largely excluded from a share in Mediterranean commerce at a time when her increasing and vigorous population was making heavy demands on her resources, Portugal turned southwards and westwards for opportunities of trade and commerce. At this moment of national destiny it was fortunate for her that in men of the calibre of Prince Henry, known as the Navigator, and King John II she found resolute and dedicated leaders. The problems to be faced were new and complex. The conditions for navigation and commerce in the Mediterranean were relatively simple, compared with those in the western seas. The landlocked Mediterranean, tideless and with a climatic regime of regular and well-defined seasons, presented few obstacles to sailors who were the heirs of a great body of sea lore garnered from the experiences of many centuries. What hazards there were, in the form of sudden storms or dangerous coasts, were known and could usually be anticipated.
Similarly the Mediterranean coasts, though they might be for long periods in the hands of dangerous rivals, were described in sailing directions or laid down on the portolan charts drawn by Venetian, Genoese and Catalan cartographers. Problems of determining positions at sea, which confronted the Portuguese, did not arise. Though the Mediterranean seamen by no means restricted themselves to coastal sailing, the latitudinal extent of the Mediterranean was not great, and voyages could be conducted from point to point on compass bearings; the ships were never so far from land as to make it necessary to fix their positions in latitude by astronomical observations. Having made a landfall on a bearing, they could determine their precise position from prominent landmarks, soundings or the nature of the sea bed, after reference to the sailing directions or charts. By contrast, the pioneers of ocean navigation faced much greater difficulties. The western ocean which extended, according to the speculations of the cosmographers, through many degrees of latitude and longitude, was an unknown quantity, but certainly subjected to wide variations of weather and without known bounds. Those who first ventured out over its waters did so without benefit of sailing directions or traditional lore. As the Portuguese sailed southwards, they left behind them the familiar constellations in the heavens by which they could determine direction and the hours of the night, and particularly the pole-star from which by a simple operation they could determine their latitude. Along the unknown coasts they were threatened by shallows, hidden banks, rocks and contrary winds and currents, with no knowledge of convenient shelter to ride out storms or of very necessary watering places. It is little wonder that these pioneers dreaded the thought of being forced on to a lee shore or of having to choose between these inshore dangers and the unrecorded perils of the open sea.
In the past, American colleges and universities were created to serve a dual purpose: to advance learning and to offer those who wished it a chance to become familiar with bodies of knowledge already discovered. To create and to impart, these were the hallmarks of American higher education prior to the most recent, tumultuous decades of the twentieth century. The successful institution of higher learning had never been one whose mission could be defined in terms of providing vocational skills or as a strategy for resolving societal problems. In a subtle way Americans believed postsecondary education to be useful, but not necessarily of immediate use. What the student obtained in college became beneficial in later life, residually, without direct application in the period after graduation. Another purpose has now been assigned to the mission of American colleges and universities. Institutions of higher learning, public or private, commonly face the challenge of defining their programs in such a way as to contribute to the service of the community. This service role has various applications. Most common are programs to meet the demands of regional employment markets, to provide opportunities for upward social and economic mobility, to achieve racial, ethnic, or social integration, or more generally to produce "productive" as compared to "educated" graduates. Regardless of its precise definition, the idea of a service-university has won acceptance within the academic community. One need only be reminded of the change in language describing the two-year college to appreciate the new value currently being attached to the concept of a service-related university. The traditional two-year college has shed its pejorative "junior" college label and is generally called a "community" college, a clearly value-laden expression representing the latest commitment in higher education.
Even the doctoral degree, long recognized as a required "union card" in the academic world, has come under severe criticism as the pursuit of learning for its own sake and the accumulation of knowledge without immediate application to a professor's classroom duties. The idea of a college or university that performs a triple function (communicating knowledge to students, expanding the content of various disciplines, and interacting in a direct relationship with society) has been the most important change in higher education in recent years. This novel development is often overlooked. Educators have always been familiar with those parts of the two-year college curriculum that have a "service" or vocational orientation. Knowing this, otherwise perceptive commentaries on American postsecondary education underplay the impact of the attempt of colleges and universities to relate to, if not resolve, the problems of society. Whether the subject under review is student unrest, faculty tenure, the nature of the curriculum, the onset of collective bargaining, or the growth of collegiate bureaucracies, in each instance the thrust of these discussions obscures the larger meaning of the emergence of the service-university in American higher education. Even the highly regarded critique of Clark Kerr, currently head of the Carnegie Foundation, which set the parameters of academic debate around the evolution of the so-called "multiversity," failed to take account of this phenomenon and the manner in which its fulfillment changed the scope of higher education. To the extent that the idea of "multiversity" centered on matters of scale (how big is too big? how complex is too complex?), it obscured the fundamental question posed by the service-university: what is higher education supposed to do?
Unless the commitment to what Samuel Gould has properly called the "communiversity" is clearly articulated, the success of any college or university in achieving its service-education functions will be effectively impaired. . . . The most reliable report about the progress of Open Admissions became available at the end of August, 1974. What the document showed was that the dropout rate for all freshmen admitted in September, 1970, after seven semesters, was about 48 percent, a figure that corresponds closely to national averages at similar colleges and universities. The discrepancy between the performance of "regular" students (those who would have been admitted into the four-year colleges with 80% high school averages and into the two-year units with 75%) and Open Admissions freshmen provides a better indication of how the program worked. Taken together, the attrition rate (from known and unknown causes) was 48 percent, but the figure for regular students was 36 percent while for Open Admissions categories it was 56 percent. Surprisingly, the statistics indicated that the four-year colleges retained or graduated more of the Open Admissions students than the two-year colleges, a finding that did not reflect experience elsewhere. Not surprisingly, perhaps, the figures indicated a close relationship between academic success defined as retention or graduation and high school averages. Similarly, it took longer for the Open Admissions students to generate college credits and graduate than regular students, a pattern similar to national averages. The most important statistics, however, relate to the findings regarding Open Admissions students, and these indicated as a projection that perhaps as many as 70 percent would not graduate from a unit of the City University.
"The United States seems totally indifferent to our problems," charges French Foreign Minister Claude Cheysson, defending his Government's decision to defy President Reagan and proceed with construction of the Soviet gas pipeline. West German Chancellor Helmut Schmidt endorsed the French action and sounded a similar note. Washington's handling of the pipeline, he said, has "casta shadow over relations" between Europe and the United States," damaging confidence as regards future agreements.'' But it's not just the pipeline that has made a mockery of Versailles. Charges of unfair trade practices and threats of retaliation in a half-dozen industries are flying back and forth over the Atlantic-and the Pacific, too|min a worrisome crescendo. Businessmen, dismayed by the long siege of sluggish economic growth that has left some 30 million people in the West unemployed, are doing what comes naturally: pressuring politicians to restrain imports, subsidize exports, or both. Steelmakers in Bonn and Pittsburgh want help; so do auto makers in London and Detroit, textile, apparel and shoe manufacturers throughout the West and farmers virtually everywhere. Democratic governments, the targets of such pressure, are worried about their own political fortunes and embarrassed by their failure to generate strong growth and lower unemployment. The temptation is strong to take the path of least resistance and tighten up on trade-even for a Government as devoted to the free market as Ronald Reagan's. In the past 18 months, Washington, beset by domestic producers, has raised new barriers against imports in autos, textiles and sugar. Steel is likely to be next. Nor is the United States alone. European countries, to varying degrees, have also sought to defend domestic markets or to promote exports through generous subsidies. . . . The upcoming meeting, to consider trade policy for the 1980's, is surely well timed. 
"It has been suggested often that world trade policy is 'at a crossroads'|mbut such a characterization of the early 1980's may be reasonably accurate," says C. Fred Bergsten, a former Treasury official in the Carter Administration, now director of a new Washington think tank, the Institute for International Economics. The most urgent question before the leaders of the industrial world is whether they can change the fractious atmosphere of this summer before stronger protective measures are actually put in place. So far, Mr. Bergsten says, words have outweighed deeds. The trade picture is dismal. World trade reached some $2 trillion a year in 1980 and hasn't budged since .In the first half of this year, Mr. Bergsten suspects that trade probably fell as the world economy stayed flat. But, according to his studies, increased protectionism is not the culprit for the slowdown in trade|mat least not yet. The culprit instead is slow growth and recession, and the resulting slump in demand for imports. . . . But there are fresh problems today that could be severely damaging. Though tariffs and outright quotas are low after three rounds of intense international trade negotiations in the past two decades |mnew trade restraints, often bound up in voluntary agreements between countries to limit particular imports, have sprouted in recent years like mushrooms in a wet wood. Though the new protectionism is more subtle than the old-fashioned variety, it is no less damaging to economic efficiency and, ultimately, to prospects for world economic growth. A striking feature is that the new protectionism has focused on the same limited sectors in most of the major industrial countries |mtextiles, steel, electronics, footwear, shipbuilding and autos. Similarly, it has concentrated on supply from Japan and the newly industrialized countries. When several countries try to protect the same industries, the dealings become difficult. Take steel. 
Since 1977, the European Economic Community has been following a plan to eliminate excess steel capacity, using bilateral import quotas along the way to soften the blow to the steelworkers. The United States, responding to similar pressure at home and to the same problem of a world oversupplied with steel, introduced a "voluntary" quota system in 1969, and, after a brief period of no restraint, developed a complex trigger price mechanism in 1978.
Each spring vast flocks of songbirds migrate north from Mexico to the United States, but since the 1960s their numbers have fallen by up to 50 percent. Frog populations around the world have declined in recent years. The awe-inspiring California condor survives today only because of breeding programs in zoos. Indeed, plant and animal species are disappearing from the earth at an alarming rate, and many scientists believe that human activity is largely responsible. Biodiversity, or the biological variety that thrives in a healthy ecosystem, became the focus of intense international concern during the 1990s. If present trends continue, Harvard University biologist Edward O. Wilson, one of the leading authorities on biodiversity, estimates that the world could lose 20 percent of all existing species by the year 2020. Biodiversity has become such a vogue word that academics have begun to take surveys of scientists to find out what they mean by it. For Adrian Forsyth, director of conservation biology for Conservation International, biodiversity is the totality of biological diversity from the molecular level to the ecosystem level. That includes the distinct species of all living things on Earth. Scientists have identified 1.4 million species, but no one knows how many actually exist, especially in hard-to-reach areas such as the deep heart of a rain forest or the bottom of an ocean. Biologists believe there may be 5 million to 10 million species, though some estimates run as high as 100 million. Habitat destruction as a result of people's use or development of land is considered the leading threat to biodiversity. For example, habitat loss is thought to be causing severe drops in the populations of migratory songbirds in North America, perhaps as much as 50 percent since the 1960s. 
Scientists studying songbirds that migrate from warm winter quarters in the southern United States, Mexico, and Central America to summer nesting grounds in the northern United States and Canada have found that the birds are losing habitat at both ends of their long journey. In the tropics forests are being cleared for agriculture, and in the north they are being cut down for roads, shopping centers, and housing subdivisions. As a result, bird censuses in the United States have shown a 33 percent decline in the population of rose-breasted grosbeaks since 1980. Another cause of the decline in biodiversity is the introduction of new species. Sometimes a new species is brought to an area intentionally, but sometimes it happens accidentally. In Illinois the native mussel populations in the Illinois River have fallen drastically since the 1993 summer flooding washed large numbers of zebra mussels into the river from Lake Michigan. Zebra mussels, native to the Caspian Sea, were inadvertently introduced to the Great Lakes, probably in the mid-1980s, by oceangoing cargo ships. Pollution is yet another threat to plants and animals. The St. Lawrence River, one habitat of the endangered beluga whale, drains the Great Lakes, historically one of the most industrialized regions in the world. The whales now have such high levels of toxic chemicals stored in their bodies that technically they qualify as hazardous waste under Canadian law. The effects of pollution can be very subtle and hard to prove because often the toxins do not kill animals outright but instead impair their natural defenses against disease or their ability to reproduce. Habitat loss is thought to be one reason for the decline in frog populations worldwide, because frogs live in wetlands, many of which have been filled in over the years for agriculture and development. 
But researchers theorize that another possible cause is increased exposure to ultraviolet radiation from the Sun as a result of the thinning of the atmosphere's ozone layer; the increased dose of ultraviolet radiation may be suppressing the frogs' immune systems, making them more vulnerable to a wide range of diseases. Of all the causes of species extinction and habitat loss, the one that seems to be at the heart of the matter is the size of the population of just one species, Homo sapiens. In 1994 the world population was estimated at more than 5.6 billion, more than double the number in 1950. With a larger population come increased demands for food, clothing, housing, and energy, all of which will likely lead to greater habitat destruction, more pollution, and less biological diversity. The number of people in the world continues to grow, but there is evidence that the population of the industrialized nations has more or less stabilized. That's important because although the population of these countries makes up only 25 percent of the world total, the developed world consumes 75 percent of the world's resources. The United Nations is treating the increase in the world's population as a serious matter. A 1994 UN-sponsored conference on population produced a 113-page plan to stabilize the number of people in the world at 7.27 billion by 2015. Otherwise, the UN feared, world population could mushroom to 12.5 billion by 2050.
Although new and effective AIDS drugs have brought hope to many HIV-infected persons, a number of social and ethical dilemmas still confront researchers and public-health officials. The latest combination drug therapies are far too expensive for infected persons in the developing world—particularly in sub-Saharan Africa, where the majority of AIDS deaths have occurred. In these regions, where the incidence of HIV infection continues to soar, the lack of access to drugs can be catastrophic. In 1998, responding to an international outcry, several pharmaceutical firms announced that they would slash the price of AIDS drugs in developing nations by as much as 75 percent. However, some countries argued that drug firms had failed to deliver on their promises of less expensive drugs. In South Africa government officials developed legislation that would enable the country to override the patent rights of drug firms by importing cheaper generic medicines made in India and Thailand to treat HIV infection. In 1998, 39 pharmaceutical companies sued the South African government on the grounds that the legislation violated international trade agreements. Pharmaceutical companies eventually dropped their legal efforts in April 2001, conceding that South Africa’s legislation did comply with international trading laws. The end of the legal battle was expected to pave the way for other developing countries to gain access to more affordable AIDS drugs. AIDS research in the developing world has raised ethical questions pertaining to the clinical testing of new therapies and potential vaccines. For example, controversy erupted over 1997 clinical trials that tested a shorter course of Zidovudine (or AZT) therapy in HIV-infected pregnant women in developing countries. Earlier studies had shown that administering AZT to pregnant women for up to six months prior to birth could cut mother-to-child transmission of HIV by up to two-thirds. 
The treatment’s $800 cost, however, made it too expensive for patients in developing nations. The controversial 1997 clinical trials, which were conducted in Thailand and other regions of Asia and Africa, tested a shorter course of AZT treatment costing only $50. Some pregnant women received AZT, while others received a placebo—a medically inactive substance often used in drug trials to help scientists determine the effectiveness of the drug under study. Ultimately the shorter course of AZT treatment proved to be successful and is now standard practice in a growing number of developing nations. However, at the time of the trials, critics charged that using a placebo on HIV-infected pregnant women—when AZT had already been shown to prevent mother-to-child transmission—was unethical and needlessly placed babies at fatal risk. Defenders of the studies countered that a placebo was necessary to accurately gauge the effectiveness of the AZT short-course treatment. Some critics questioned whether such a trial, while apparently acceptable in the developing nations of Asia and Africa, would ever have been viewed as ethical, or even permissible, in a developed nation like the United States. Similar ethical questions surround the testing of AIDS vaccines in developing nations. Vaccines typically use weakened or killed HIV to spark antibody production, and in some vaccines these weakened or killed viruses have the potential to cause infection and disease. Critics questioned whether it is ethical to place all the risk on test subjects in developing regions such as sub-Saharan Africa, where a person infected by a vaccine would have little or no access to medical care. At the same time, with AIDS causing up to 5,500 deaths a day in Africa, others feel that developing nations must pursue any medical avenue for stemming the epidemic and protecting people from the virus. 
For the struggling economies of some developing nations, AIDS has brought yet another burden: AIDS tends to kill young adults in the prime of their lives—the primary breadwinners and caregivers in families. According to figures released by the United Nations in 1999, AIDS has shortened the life expectancy in some African nations by an average of seven years. In Zimbabwe, life expectancy has dropped from 61 years in 1993 to 49 in 1999. The next few decades may see it fall as low as 41 years. Upwards of 11 million children have been orphaned by the AIDS epidemic. Those children who survive face a lack of income, a higher risk of malnutrition and disease, and the breakdown of family structure. In Africa, the disease has had a heavy impact on urban professionals—educated, skilled workers who play a critical role in the labor force of industries such as agriculture, education, transportation, and government. The decline in the skilled workforce has already damaged economic growth in Africa, and economists warn of disastrous consequences in the future. The social, ethical, and economic effects of the AIDS epidemic are still being played out, and no one is certain what the consequences will be. Despite the many grim facts of the AIDS epidemic, however, humanity is armed with proven, effective weapons against the disease: knowledge, education, prevention, and the ever-growing store of information about the virus’s actions.
The late 1980s found the landscape of popular music in America dominated by a distinctive style of rock and roll known as glam rock or hair metal—so called because of the over-styled hair, makeup, and wardrobe worn by the genre’s ostentatious rockers. Bands like Poison, Whitesnake, and Mötley Crüe popularized glam rock with their power ballads and flashy style, but the product had worn thin by the early 1990s. As superficial as the decade that spawned them, glam rockers were shallow, short on substance, and musically inferior. In 1991, a Seattle-based band called Nirvana shocked the corporate music industry with the release of its debut single, “Smells Like Teen Spirit,” which quickly became a huge hit all over the world. Nirvana’s distorted, guitar-laden sound and thought-provoking lyrics were the antithesis of glam rock, and the youth of America were quick to pledge their allegiance to the new movement known as grunge. Grunge actually got its start in the Pacific Northwest during the mid-1980s; Nirvana had simply mainstreamed a sound and culture that began years before with bands like Mudhoney, Soundgarden, and Green River. Grunge rockers derived their fashion sense from the youth culture of the Pacific Northwest: a melding of punk rock style and outdoors clothing like flannels, heavy boots, worn-out jeans, and corduroys. At the height of the movement’s popularity, when other Seattle bands like Pearl Jam and Alice in Chains were all the rage, the trappings of grunge were working their way into the height of American fashion. Teenagers were quick to embrace grunge fashion, as they had the music, because it represented defiance against corporate America and shallow pop culture. The popularity of grunge music was ephemeral; by the mid- to late 1990s, its influence upon American culture had all but disappeared, and most of its recognizable bands were nowhere to be seen on the charts. 
The heavy sound and themes of grunge were replaced on the radio waves by boy bands like the Backstreet Boys and the bubblegum pop of Britney Spears and Christina Aguilera. There are many reasons why the Seattle sound faded from the mainstream as quickly as it rocketed to prominence, but the most glaring lies at the defiant, anti-establishment heart of the grunge movement itself. It is very hard to buck the trend when you are the one setting it, and many of the grunge bands were never comfortable with the fame that was thrust upon them. Ultimately, the simple fact that so many grunge bands rejected mainstream rock stardom took the movement back to where it started: underground. The fickle American mainstream public, as quick as it was to hop on the grunge bandwagon, was just as quick to hop off and move on to something else.
Solar storms are natural events that occur when high-energy particles from the sun hit the earth. They take place when the sun releases energy in the form of outbursts or eruptions, also called solar flares, in which energy is set free and carried off into outer space.
Solar storms contain gas and other matter and can travel at extremely high speeds. When such particles hit the Earth, or any other planet with an atmosphere, they cause a geomagnetic storm: a disturbance in the magnetic field that surrounds our planet. Normally such outbursts are not dangerous. They are the cause of polar lights, the bright, colorful lights in the skies of the northern regions. They may, however, endanger us in other ways. Such outbursts of the sun’s energy can cause communication problems, interfere with satellite reception, or lead to incorrect GPS readings. In the past they have even shut down electric power grids. The most damaging events happened in the 19th century, when solar storms started fires in North America and Europe and caused auroras as far south as the equator. Luckily, the world did not yet depend on the technology we have today; such forceful eruptions could do much more damage now. An American investigation in 2008 showed that extreme solar storms could cause billions of dollars in damage. Several organizations around the world monitor the sun’s activity and the disturbances that occur in its atmosphere. They also operate detectors that register variations in the Earth’s magnetic field. Solar cycles repeat themselves every 11 years. Right now the Earth is experiencing the most severe solar storm since 2003, and sky watchers in Canada and Scandinavia are already reporting sightings of more northern lights than usual. As the sun becomes more active, we will see more and more solar flares over the next few years. However, the solar cycle we are in at the moment is relatively quiet compared with others of recent decades. The last major problems caused by solar storms occurred in 1994, when communications satellites over Canada malfunctioned and power in many parts of the country went out for a few hours. 
When solar storms pass through the earth’s atmosphere, radiation levels are higher for a few days. Airlines are especially worried about these outbursts of radiation because long-distance flights use polar routes, where disruptions are most severe. During such storms there are periods when the crew cannot communicate with ground control stations. Astronauts orbiting the earth in the International Space Station may also be in danger because radiation levels are much higher than normal. Outbursts of solar energy even affect animals that are sensitive to changes in Earth’s magnetic field; during such events they lose orientation and get lost.
Although the overall situation of women has improved in the past decades, they are still discriminated against when it comes to work. They get paid less than men for the same work and in some cases do not have the same opportunities as men to reach high-ranking positions. However, this is starting to change. Organizations like the United Nations and UNESCO in particular are giving women better opportunities, and many European Union countries have introduced quotas for women in high-ranking positions. But in other areas women are still second-class citizens. In the industrialized countries of the developed world, women have moved much closer to equality with men. In the past four decades the proportion of women who have paid jobs has gone up from below half to 64%. There are, however, differences from country to country: while in Scandinavian countries almost three quarters of all women have a job, the share of women in the labor force in southern and eastern Europe is only about 50%. The role of women changed drastically during the 20th century. In the early 1900s female workers were employed mainly in factories or worked as servants. In the course of time they became more educated and started working as nurses, teachers, even doctors and lawyers. In the 1960s, women were for the first time able to actively plan their families; birth control pills and other contraceptives made it possible for women to have a career, a family, or both. Many went to high school and college and sought jobs, and in the 1970s women in developed countries started to become a major part of the workforce. More women in the workforce have brought many advantages for industries and employers. They have a wider variety of workers to choose from, and women often have good ideas and make positive contributions to how things are done. Additional workers also help the economy thrive; they spend money and contribute to the growth of national income. 
In many countries, working women provide extra income for a population that is getting older and older. In America, economists estimate that GDP is 25% larger than it would be without women in the workforce. According to a new survey, about one billion women are expected to enter the workforce in the next decade. This should not only contribute to economic growth but also improve gender equality. Even though women should be treated equally, they still get, on average, about 18% less pay for the same work. Women suffer from inequalities in other areas too. Many wish to start a career and search for fulfillment outside family life, yet in most cases it is harder for them to reach the absolute top than it is for men; only about 3% of top CEOs are women. While the situation of women in developed countries may have come to a standstill, women in Asian countries like China, Singapore, and South Korea are experiencing a boom in good job offers, and more and more of them are reaching top positions. One issue that is still hard for women to manage is child care. They spend more on education and babysitters, and single mothers who raise a child alone find it nearly impossible to reach a top position at the same time. Even if a woman has a working husband, men are not keen on taking leave to care for the baby; most men still consider this a woman’s job. Nevertheless, there are countries where women and men lead equal lives and find equal opportunity. Among the Scandinavian countries, which generally offer many opportunities for women, Iceland ranks first. The United States is currently in 19th place, up from the 31st spot, mainly because President Obama has offered women more jobs in government offices. At the bottom of the list are developing countries like Yemen and Pakistan.
Rice is one of the world’s most important food crops. It is a grain, like wheat and corn. Almost all the people who depend on rice for their food live in Asia. Young rice plants are bright green. The grain is ripe about 120 to 180 days after planting and turns golden yellow at harvest time. In some tropical countries rice can be harvested up to three times a year. Each rice plant carries hundreds or even thousands of kernels. A typical rice kernel is 6–10 mm long and has four parts: the hull, the hard outer part, which is not good to eat; the bran layers, which protect the inner parts of the kernel and contain vitamins and minerals; the endosperm, which makes up most of the kernel and contains a lot of starch; and the embryo, a small part from which a new rice plant can grow. Rice grows best in tropical regions. It needs a lot of water and high temperatures, and it grows on heavy, muddy soils that can hold water. In many cases farmers grow rice in paddies, fields with dirt walls around them to keep the water inside. The fields are flooded with water, and seeds or small rice plants are placed into the muddy soil. In southeast Asia and other developing countries farmers do most of the work by hand, using oxen or water buffaloes to pull the ploughs. In industrialized countries the work is done mostly by machines. Two or three weeks before the harvest begins, water is pumped out of the fields. The rice is cut, and the kernels are separated from the rest of the plant. The wet kernels are laid on mats to dry in the sun. Sometimes brown rice, in which the bran layers remain, is produced. Then the rice is packaged and sold. Rice gives your body energy in the form of carbohydrates. It also contains vitamin B and other minerals. Rice has little fat and is easy to digest. Rice is in many other foods as well: breakfast cereals, frozen and baby foods, and soup. Breweries use rice to make beer. 
In Japan, rice kernels are used to make an alcoholic drink. Most rice is grown in lowland regions, but about one fifth of the world’s rice is upland rice, which grows on terraces in the mountains. The world’s farmers grow more than 700 million tons a year, and 90% of the rice production comes from Asia. China and India are the world’s biggest producers; in these countries rice is planted in the big river plains of the Ganges and the Yangtze. Almost all of Asia’s rice is eaten by the population there, and sometimes these countries do not even have enough to feed their own people. Other countries, like the USA, produce rice for export.
In the past thirty years, Americans’ consumption of restaurant and take-out food has doubled. The result, according to many health watchdog groups, is an increase in overweight and obesity. Almost 60 million Americans are obese, costing $117 billion each year in health care and related costs. Members of Congress have decided they need to do something about the obesity epidemic. A bill was recently introduced in the House that would require restaurants with twenty or more locations to list the nutritional content of their food on their menus. A Senate version of the bill is expected in the near future. Our legislators point to the trend of restaurants’ marketing larger meals at attractive prices. People order these meals believing that they are getting a great value, but what they are also getting could be, in one meal, more than the daily recommended allowances of calories, fat, and sodium. The question is, would people stop “supersizing,” or make other healthier choices, if they knew the nutritional content of the food they’re ordering? Lawmakers think they would, and the gravity of the obesity problem has caused them to act to change menus. The Menu Education and Labeling, or MEAL, Act would result in menus that look like the nutrition facts panels found on food in supermarkets. Those panels are required by the 1990 Nutrition Labeling and Education Act, which exempted restaurants. The new restaurant menus would list calories, fat, and sodium on printed menus, and calories on menu boards, for all items that are offered on a regular basis (daily specials are exempt). But isn’t this simply asking restaurants to state the obvious? Who isn’t aware that an order of supersize fries isn’t health food? Does anyone order a double cheeseburger thinking they’re being virtuous? Studies have shown that it’s not that simple. In one, registered dieticians couldn’t come up with accurate estimates of the calories found in certain fast foods. 
Who would have guessed that a milk shake, which sounds pretty healthy (it does contain milk, after all), has more calories than three McDonald’s cheeseburgers? Or that one chain’s chicken breast sandwich, another better-sounding alternative to a burger, contains more than half a day’s calories and twice the recommended daily amount of sodium? Even a fast-food coffee drink, without a doughnut to go with it, has almost half the calories needed in a day. The restaurant industry isn’t happy about the new bill. Arguments against it include the fact that diet alone is not the reason for America’s obesity epidemic; a lack of adequate exercise is also to blame. In addition, many fast food chains already post nutritional information on their websites or on posters located in their restaurants. Those who favor the MEAL Act and similar legislation say in response that we must do all we can to help people maintain a healthy weight. While the importance of exercise is undeniable, the quantity and quality of what we eat must also change. They believe that if we want consumers to make better choices when they eat out, nutritional information must be provided where they are selecting their food. Restaurant patrons are not likely to have memorized the calorie counts they may have looked up on the Internet, nor are they going to leave their tables, or a line, to check out a poster that might be on the opposite side of the restaurant.
In 1904, the U.S. Patent Office granted a patent for a board game called “The Landlord’s Game,” which was invented by a Virginia Quaker named Lizzie Magie. Magie was a follower of Henry George, who started a tax movement that supported the theory that the renting of land and real estate produced an unearned increase in land values that profited a few individuals (landlords) rather than the majority of the people (tenants). George proposed a single federal tax based on land ownership; he believed this tax would weaken the ability to form monopolies, encourage equal opportunity, and narrow the gap between rich and poor. Lizzie Magie wanted to spread the word about George’s proposal, making it more understandable to a majority of people who were basically unfamiliar with economics. As a result, she invented a board game that would serve as a teaching device. The Landlord’s Game was intended to explain the evils of monopolies, showing that they repressed the possibility for equal opportunity. Her instructions read in part: “The object of this game is not only to afford amusement to players, but to illustrate to them how, under the present or prevailing system of land tenure, the landlord has an advantage over other enterprisers, and also how the single tax would discourage speculation.” The board for the game was painted with forty spaces around its perimeter, including four railroads, two utilities, twenty-two rental properties, and a jail. There were other squares directing players to go to jail, pay a luxury tax, and park. All properties were available for rent, rather than purchase. Magie’s invention became very popular, spreading through word of mouth, and altering slightly as it did. Since it was not manufactured by Magie, the boards and game pieces were homemade. Rules were explained and transmuted, from one group of friends to another. There is evidence to suggest that The Landlord’s Game was played at Princeton, Harvard, and the University of Pennsylvania. 
In 1924, Magie approached George Parker (president of Parker Brothers) to see if he was interested in purchasing the rights to her game. Parker turned her down, saying that it was too political. The game nevertheless grew in popularity, migrating north to New York state, west to Michigan, and as far south as Texas. By the early 1930s, it reached Charles Darrow in Philadelphia. In 1935, claiming to be the inventor, Darrow got a patent for the game and approached Parker Brothers. This time, the company loved it, swallowed Darrow’s prevarication, and not only purchased his patent but paid him royalties for every game sold. The game quickly became Parker Brothers’ bestseller and made the company, and Darrow, millions of dollars. When Parker Brothers found out that Darrow was not the true inventor of the game, they went back to Lizzie Magie, now Mrs. Elizabeth Magie Phillips of Clarendon, Virginia, in order to protect their rights to the successful game. She agreed to a payment of $500 for her patent, with no royalties; in return, so that she could stay true to the original intent of her game’s invention, she required that Parker Brothers manufacture and market The Landlord’s Game in addition to Monopoly. However, only a few hundred copies of her game were ever produced. Monopoly went on to become the world’s bestselling board game, with an objective that is the exact opposite of the one Magie intended: “The idea of the game is to buy and rent or sell property so profitably that one becomes the wealthiest player and eventually monopolist. The game is one of shrewd and amusing trading and excitement.”