Very difficult texts

Milankovitch proposed in the early twentieth century that the ice ages were caused by variations in the Earth’s orbit around the Sun. For some time this theory was considered untestable, largely because there was no sufficiently precise chronology of the ice ages with which the orbital variations could be matched.

To establish such a chronology it is necessary to determine the relative amounts of land ice that existed at various times in the Earth’s past. A recent discovery makes such a determination possible: relative land-ice volume for a given period can be deduced from the ratio of two oxygen isotopes, 16 and 18, found in ocean sediments. Almost all the oxygen in water is oxygen 16, but a few molecules out of every thousand incorporate the heavier isotope 18. When an ice age begins, the continental ice sheets grow, steadily reducing the amount of water evaporated from the ocean that will eventually return to it. Because heavier isotopes tend to be left behind when water evaporates from the ocean surfaces, the remaining ocean water becomes progressively enriched in oxygen 18. The degree of enrichment can be determined by analyzing ocean sediments of the period, because these sediments are composed of calcium carbonate shells of marine organisms, shells that were constructed with oxygen atoms drawn from the surrounding ocean. The higher the ratio of oxygen 18 to oxygen 16 in a sedimentary specimen, the more land ice there was when the sediment was laid down.

As an indicator of shifts in the Earth’s climate, the isotope record has two advantages. First, it is a global record: there is remarkably little variation in isotope ratios in sedimentary specimens taken from different continental locations. Second, it is a more continuous record than that taken from rocks on land. Because of these advantages, sedimentary evidence can be dated with sufficient accuracy by radiometric methods to establish a precise chronology of the ice ages.
The dated isotope record shows that the fluctuations in global ice volume over the past several hundred thousand years have a pattern: an ice age occurs roughly once every 100,000 years. These data have established a strong connection between variations in the Earth’s orbit and the periodicity of the ice ages. However, it is important to note that other factors, such as volcanic particulates or variations in the amount of sunlight received by the Earth, could potentially have affected the climate. The advantage of the Milankovitch theory is that it is testable: changes in the Earth’s orbit can be calculated and dated by applying Newton’s laws of gravity to progressively earlier configurations of the bodies in the solar system. Yet the lack of information about other possible factors affecting global climate does not make them unimportant.
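The enrichment reasoning in the passage is conventionally expressed as a δ¹⁸O value: the per-mil deviation of a specimen’s ¹⁸O/¹⁶O ratio from a reference standard. The passage does not give this formula; the sketch below uses the standard geochemical convention, with a reference ratio close to the VSMOW standard and made-up, purely illustrative sample ratios:

```python
# delta-18O: per-mil deviation of a sample's 18O/16O ratio from a reference
# standard. The default reference ratio approximates the VSMOW standard;
# the sample ratios below are invented for illustration only.
def delta_18o(ratio_sample: float, ratio_standard: float = 0.0020052) -> float:
    """Per-mil (per thousand) deviation from the standard ratio."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# A glacial-period sediment sample, enriched in oxygen 18, yields a higher
# delta value than an interglacial one: more land ice when it was laid down.
glacial = delta_18o(0.0020092)
interglacial = delta_18o(0.0020060)
print(glacial > interglacial)
```

On the passage’s logic, ranking sediment layers by this value ranks the corresponding periods by relative land-ice volume.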





For millennia, the coconut has been central to the lives of Polynesian and Asian peoples. In the western world, on the other hand, coconuts have always been exotic and unusual, sometimes rare. The Italian merchant traveller Marco Polo apparently saw coconuts in South Asia in the late 13th century, and among the mid-14th-century travel writings of Sir John Mandeville there is mention of ‘great Notes of Ynde’ (great Nuts of India). Today, images of palm-fringed tropical beaches are clichés used in the west to sell holidays, chocolate bars, fizzy drinks and even romance.

Typically, we envisage coconuts as brown cannonballs that, when opened, provide sweet white flesh. But we see only part of the fruit and none of the plant from which they come. The coconut palm has a smooth, slender, grey trunk, up to 30 metres tall. This is an important source of timber for building houses, and is increasingly being used as a replacement for endangered hardwoods in the furniture construction industry. The trunk is surmounted by a rosette of leaves, each of which may be up to six metres long. The leaves have hard veins in their centres which, in many parts of the world, are used as brushes after the green part of the leaf has been stripped away. Immature coconut flowers are tightly clustered together among the leaves at the top of the trunk. The flower stems may be tapped for their sap to produce a drink, and the sap can also be reduced by boiling to produce a type of sugar used for cooking.

Coconut palms produce as many as seventy fruits per year, weighing more than a kilogram each. The wall of the fruit has three layers: a waterproof outer layer, a fibrous middle layer and a hard, inner layer. The thick fibrous middle layer produces coconut fibre, ‘coir’, which has numerous uses and is particularly important in manufacturing ropes. The woody innermost layer, the shell, with its three prominent ‘eyes’, surrounds the seed.
An important product obtained from the shell is charcoal, which is widely used in various industries as well as in the home as a cooking fuel. When broken in half, the shells are also used as bowls in many parts of Asia.

Inside the shell are the nutrients (endosperm) needed by the developing seed. Initially, the endosperm is a sweetish liquid, coconut water, which is enjoyed as a drink, but also provides the hormones which encourage other plants to grow more rapidly and produce higher yields. As the fruit matures, the coconut water gradually solidifies to form the brilliant white, fat-rich, edible flesh or meat. Dried coconut flesh, ‘copra’, is made into coconut oil and coconut milk, which are widely used in cooking in different parts of the world, as well as in cosmetics. A derivative of coconut fat, glycerine, acquired strategic importance in a quite different sphere, as Alfred Nobel introduced the world to his nitroglycerine-based invention: dynamite.

Their biology would appear to make coconuts the great maritime voyagers and coastal colonizers of the plant world. The large, energy-rich fruits are able to float in water and tolerate salt, but cannot remain viable indefinitely; studies suggest after about 110 days at sea they are no longer able to germinate. Literally cast onto desert island shores, with little more than sand to grow in and exposed to the full glare of the tropical sun, coconut seeds are able to germinate and root. The air pocket in the seed, created as the endosperm solidifies, protects the embryo. In addition, the fibrous fruit wall that helped it to float during the voyage stores moisture that can be taken up by the roots of the coconut seedling as it starts to grow.

There have been centuries of academic debate over the origins of the coconut. There were no coconut palms in West Africa, the Caribbean or the east coast of the Americas before the voyages of the European explorers Vasco da Gama and Columbus in the late 15th and early 16th centuries.
16th century trade and human migration patterns reveal that Arab traders and European sailors are likely to have moved coconuts from South and Southeast Asia to Africa and then across the Atlantic to the east coast of America. But the origin of coconuts discovered along the west coast of America by 16th century sailors has been the subject of centuries of discussion. Two diametrically opposed origins have been proposed: that they came from Asia, or that they were native to America. Both suggestions have problems. In Asia, there is a large degree of coconut diversity and evidence of millennia of human use – but there are no relatives growing in the wild. In America, there are close coconut relatives, but no evidence that coconuts are indigenous. These problems have led to the intriguing suggestion that coconuts originated on coral islands in the Pacific and were dispersed from there.



Changes in reading habits

Look around on your next plane trip. The iPad is the new pacifier for babies and toddlers. Younger school-aged children read stories on smartphones; older kids don’t read at all, but hunch over video games. Parents and other passengers read on tablets or skim a flotilla of email and news feeds. Unbeknown to most of us, an invisible, game-changing transformation links everyone in this picture: the neuronal circuit that underlies the brain’s ability to read is subtly, rapidly changing, and this has implications for everyone from the pre-reading toddler to the expert adult.

As work in neurosciences indicates, the acquisition of literacy necessitated a new circuit in our species’ brain more than 6,000 years ago. That circuit evolved from a very simple mechanism for decoding basic information, like the number of goats in one’s herd, to the present, highly elaborated reading brain. My research depicts how the present reading brain enables the development of some of our most important intellectual and affective processes: internalized knowledge, analogical reasoning, and inference; perspective-taking and empathy; critical analysis and the generation of insight. Research surfacing in many parts of the world now cautions that each of these essential ‘deep reading’ processes may be under threat as we move into digital-based modes of reading.

This is not a simple, binary issue of print versus digital reading and technological innovation. As MIT scholar Sherry Turkle has written, we do not err as a society when we innovate but when we ignore what we disrupt or diminish while innovating.
In this hinge moment between print and digital cultures, society needs to confront what is diminishing in the expert reading circuit, what our children and older students are not developing, and what we can do about it.

We know from research that the reading circuit is not given to human beings through a genetic blueprint like vision or language; it needs an environment to develop. Further, it will adapt to that environment’s requirements – from different writing systems to the characteristics of whatever medium is used. If the dominant medium advantages processes that are fast, multi-task oriented and well-suited for large volumes of information, like the current digital medium, so will the reading circuit. As UCLA psychologist Patricia Greenfield writes, the result is that less attention and time will be allocated to slower, time-demanding deep reading processes.

Increasing reports from educators and from researchers in psychology and the humanities bear this out. English literature scholar and teacher Mark Edmundson describes how many college students actively avoid the classic literature of the 19th and 20th centuries in favour of something simpler, as they no longer have the patience to read longer, denser, more difficult texts. We should be less concerned with students’ ‘cognitive impatience’, however, than with what may underlie it: the potential inability of large numbers of students to read with a level of critical analysis sufficient to comprehend the complexity of thought and argument found in more demanding texts.

Multiple studies show that digital screen use may be causing a variety of troubling downstream effects on reading comprehension in older high school and college students. In Stavanger, Norway, psychologist Anne Mangen and her colleagues studied how high school students comprehend the same material in different mediums.
Mangen’s group asked subjects questions about a short story whose plot had universal student appeal; half of the students read the story on a tablet, the other half in paperback. Results indicated that students who read in print were superior in their comprehension to their screen-reading peers, particularly in their ability to sequence detail and reconstruct the plot in chronological order.

Ziming Liu from San Jose State University has conducted a series of studies which indicate that the ‘new norm’ in reading is skimming, involving word-spotting and browsing through the text. Many readers now use a pattern when reading in which they sample the first line and then word-spot through the rest of the text. When the reading brain skims like this, it reduces time allocated to deep reading processes. In other words, we don’t have time to grasp complexity, to understand another’s feelings, to perceive beauty, and to create thoughts of our own.

The possibility that critical analysis, empathy and other deep reading processes could become the unintended ‘collateral damage’ of our digital culture is not a straightforward binary issue about print versus digital reading. It is about how we all have begun to read on various mediums and how that changes not only what we read, but also the purposes for which we read. Nor is it only about the young. The subtle atrophy of critical analysis and empathy affects us all equally. It affects our ability to navigate a constant bombardment of information. It incentivizes a retreat to the most familiar stores of unchecked information, which require and receive no analysis, leaving us susceptible to false information and irrational ideas.

There’s an old rule in neuroscience that does not alter with age: use it or lose it. It is a very hopeful principle when applied to critical thought in the reading brain because it implies choice. The story of the changing reading brain is hardly finished.
We possess both the science and the technology to identify and redress the changes in how we read before they become entrenched. If we work to understand exactly what we will lose, alongside the extraordinary new capacities that the digital world has brought us, there is as much reason for excitement as caution.


Attitudes towards Artificial Intelligence

A Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences. Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong. If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

B Take the case of Watson for Oncology, one of technology giant IBM’s supercomputer programs. Their attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson’s recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine-learning algorithms were simply too complex to be fully understood by humans.
Consequently, this caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

C This is just one example of people’s lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to comprehend. And interacting with something we don’t understand can cause anxiety and give us a sense that we’re losing control. Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.

D Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded. This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as "confirmation bias".
As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

E Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them. Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.

F Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future. We don’t need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.

 

How to make wise decisions

Across cultures, wisdom has been considered one of the most revered human qualities. Although the truly wise may seem few and far between, empirical research examining wisdom suggests that it isn’t an exceptional trait possessed by a small handful of bearded philosophers after all – in fact, the latest studies suggest that most of us have the ability to make wise decisions, given the right context.

‘It appears that experiential, situational, and cultural factors are even more powerful in shaping wisdom than previously imagined,’ says Associate Professor Igor Grossmann of the University of Waterloo in Ontario, Canada. ‘Recent empirical findings from cognitive, developmental, social, and personality psychology cumulatively suggest that people’s ability to reason wisely varies dramatically across experiential and situational contexts. Understanding the role of such contextual factors offers unique insights into understanding wisdom in daily life, as well as how it can be enhanced and taught.’

It seems that it’s not so much that some people simply possess wisdom and others lack it, but that our ability to reason wisely depends on a variety of external factors. ‘It is impossible to characterize thought processes attributed to wisdom without considering the role of contextual factors,’ explains Grossmann. ‘In other words, wisdom is not solely an “inner quality” but rather unfolds as a function of situations people happen to be in. Some situations are more likely to promote wisdom than others.’

Coming up with a definition of wisdom is challenging, but Grossmann and his colleagues have identified four key characteristics as part of a framework of wise reasoning. One is intellectual humility or recognition of the limits of our own knowledge, and another is appreciation of perspectives wider than the issue at hand.
Sensitivity to the possibility of change in social relations is also key, along with compromise or integration of different attitudes and beliefs.

Grossmann and his colleagues have also found that one of the most reliable ways to support wisdom in our own day-to-day decisions is to look at scenarios from a third-party perspective, as though giving advice to a friend. Research suggests that when adopting a first-person viewpoint we focus on ‘the focal features of the environment’ and when we adopt a third-person, ‘observer’ viewpoint we reason more broadly and focus more on interpersonal and moral ideals such as justice and impartiality. Looking at problems from this more expansive viewpoint appears to foster cognitive processes related to wise decisions.

What are we to do, then, when confronted with situations like a disagreement with a spouse or negotiating a contract at work that require us to take a personal stake? Grossmann argues that even when we aren’t able to change the situation, we can still evaluate these experiences from different perspectives.

For example, in one experiment that took place during the peak of a recent economic recession, graduating college seniors were asked to reflect on their job prospects. The students were instructed to imagine their career either ‘as if you were a distant observer’ or ‘before your own eyes as if you were right there’. Participants in the group assigned to the ‘distant observer’ role displayed more wisdom-related reasoning (intellectual humility and recognition of change) than did participants in the control group.

In another study, couples in long-term romantic relationships were instructed to visualize an unresolved relationship conflict either through the eyes of an outsider or from their own perspective. Participants then discussed the incident with their partner for 10 minutes, after which they wrote down their thoughts about it.
Couples in the ‘other’s eyes’ condition were significantly more likely to rely on wise reasoning – recognizing others’ perspectives and searching for a compromise – compared to the couples in the egocentric condition.

‘Ego-decentering promotes greater focus on others and enables a bigger picture, conceptual view of the experience, affording recognition of intellectual humility and change,’ says Grossmann.

We might associate wisdom with intelligence or particular personality traits, but research shows only a small positive relationship between wise thinking and crystallized intelligence and the personality traits of openness and agreeableness. ‘It is remarkable how much people can vary in their wisdom from one situation to the next, and how much stronger such contextual effects are for understanding the relationship between wise judgment and its social and affective outcomes as compared to the generalized “traits”,’ Grossmann explains. ‘That is, knowing how wisely a person behaves in a given situation is more informative for understanding their emotions or likelihood to forgive [or] retaliate as compared to knowing whether the person may be wise “in general”.’


The White Horse of Uffington

The cutting of huge figures or ‘geoglyphs’ into the earth of English hillsides has taken place for more than 3,000 years. There are 56 hill figures scattered around England, with the vast majority on the chalk downlands of the country’s southern counties. The figures include giants, horses, crosses and regimental badges. Although the majority of these geoglyphs date within the last 300 years or so, there are one or two that are much older.

The most famous of these figures is perhaps also the most mysterious – the Uffington White Horse in Oxfordshire. The White Horse has recently been re-dated and shown to be even older than its previously assigned ancient pre-Roman Iron Age date. More controversial is the date of the enigmatic Long Man of Wilmington in Sussex. While many historians are convinced the figure is prehistoric, others believe that it was the work of an artistic monk from a nearby priory and was created between the 11th and 15th centuries.

The method of cutting these huge figures was simply to remove the overlying grass to reveal the gleaming white chalk below. However, the grass would soon grow over the geoglyph again unless it was regularly cleaned or scoured by a fairly large team of people. One reason that the vast majority of hill figures have disappeared is that when the traditions associated with the figures faded, people no longer bothered or remembered to clear away the grass to expose the chalk outline. Furthermore, over hundreds of years the outlines would sometimes change due to people not always cutting in exactly the same place, thus creating a shape different from the original geoglyph.
The fact that any ancient hill figures survive at all in England today is testament to the strength and continuity of local customs and beliefs which, in one case at least, must stretch back over millennia.

The Uffington White Horse is a unique, stylised representation of a horse consisting of a long, sleek back, thin disjointed legs, a streaming tail, and a bird-like beaked head. The elegant creature almost melts into the landscape. The horse is situated 2.5 km from Uffington village on a steep slope close to the Late Bronze Age* (c. 7th century BCE) hillfort of Uffington Castle and below the Ridgeway, a long-distance Neolithic** track.

The Uffington Horse is also surrounded by Bronze Age burial mounds. It is not far from the Bronze Age cemetery of Lambourn Seven Barrows, which consists of more than 30 well-preserved burial mounds. The carving has been placed in such a way as to make it extremely difficult to see from close quarters, and like many geoglyphs is best appreciated from the air. Nevertheless, there are certain areas of the Vale of the White Horse, the valley containing and named after the enigmatic creature, from which an adequate impression may be gained. Indeed on a clear day the carving can be seen from up to 30 km away.

The earliest evidence of a horse at Uffington is from the 1070s CE, when ‘White Horse Hill’ is mentioned in documents from the nearby Abbey of Abingdon, and the first reference to the horse itself is soon after, in 1190 CE. However, the carving is believed to date back much further than that. Due to the similarity of the Uffington White Horse to the stylised depictions of horses on 1st century BCE coins, it had been thought that the creature must also date to that period.

However, in 1995 Optically Stimulated Luminescence (OSL) testing was carried out by the Oxford Archaeological Unit on soil from two of the lower layers of the horse’s body, and from another cut near the base.
The result was a date for the horse’s construction somewhere between 1400 and 600 BCE – in other words, it had a Late Bronze Age or Early Iron Age origin.

The latter end of this date range would tie the carving of the horse in with occupation of the nearby Uffington hillfort, indicating that it may represent a tribal emblem marking the land of the inhabitants of the hillfort. Alternatively, the carving may have been carried out during a Bronze or Iron Age ritual. Some researchers see the horse as representing the Celtic*** horse goddess Epona, who was worshipped as a protector of horses, and for her associations with fertility. However, the cult of Epona was not imported from Gaul (France) until around the first century CE. This date is at least six centuries after the Uffington Horse was probably carved. Nevertheless, the horse had great ritual and economic significance during the Bronze and Iron Ages, as attested by its depictions on jewellery and other metal objects. It is possible that the carving represents a goddess in native mythology, such as Rhiannon, described in later Welsh mythology as a beautiful woman dressed in gold and riding a white horse.

The fact that geoglyphs can disappear easily, along with their associated rituals and meaning, indicates that they were never intended to be anything more than temporary gestures. But this does not lessen their importance. These giant carvings are a fascinating glimpse into the minds of their creators and how they viewed the landscape in which they lived.






We should be more tolerant of microbes

Microbes, most of them bacteria, have populated this planet since long before animal life developed and they will outlive us. Invisible to the naked eye, they are ubiquitous. They inhabit the soil, air, rocks and water and are present within every form of life, from seaweed and coral to dogs and humans. And, as Yong explains in his utterly absorbing and hugely important book, we mess with them at our peril.

Every species has its own colony of microbes, called a ‘microbiome’, and these microbes vary not only between species but also between individuals and within different parts of each individual. What is amazing is that while the number of human cells in the average person is about 30 trillion, the number of microbial ones is higher – about 39 trillion. At best, Yong informs us, we are only 50 per cent human. Indeed, some scientists even suggest we should think of each species and its microbes as a single unit, dubbed a ‘holobiont’.

In each human there are microbes that live only in the stomach, the mouth or the armpit, and by and large they do so peacefully. So ‘bad’ microbes are just microbes out of context. Microbes that sit contentedly in the human gut (where there are more microbes than there are stars in the galaxy) can become deadly if they find their way into the bloodstream. These communities are constantly changing too. The right hand shares just one sixth of its microbes with the left hand. And, of course, we are surrounded by microbes. Every time we eat, we swallow a million microbes in each gram of food; we are continually swapping microbes with other humans, pets and the world at large.

It’s a fascinating topic and Yong, a young British science journalist, is an extraordinarily adept guide. Writing with lightness and panache, he has a knack of explaining complex science in terms that are both easy to understand and totally enthralling. Yong is on a mission.
Leading us gently by the hand, he takes us into the world of microbes – a bizarre, alien planet – in a bid to persuade us to love them as much as he does. By the end, we do.

For most of human history we had no idea that microbes existed. The first man to see these extraordinarily potent creatures was a Dutch lens-maker called Antony van Leeuwenhoek in the 1670s. Using microscopes of his own design that could magnify up to 270 times, he examined a drop of water from a nearby lake and found it teeming with tiny creatures he called ‘animalcules’. It wasn’t until nearly two hundred years later that the research of French biologist Louis Pasteur indicated that some microbes caused disease. It was Pasteur’s ‘germ theory’ that gave bacteria the poor image that endures today.

Yong’s book is in many ways a plea for microbial tolerance, pointing out that while fewer than one hundred species of bacteria bring disease, many thousands more play a vital role in maintaining our health. The book also acknowledges that our attitude towards bacteria is not a simple one. We tend to see the dangers posed by bacteria, yet at the same time we are sold yoghurts and drinks that supposedly nurture ‘friendly’ bacteria. In reality, says Yong, bacteria should not be viewed as either friends or foes, villains or heroes. Instead we should realise we have a symbiotic relationship that can be mutually beneficial or mutually destructive.

What then do these millions of organisms do? The answer is pretty much everything. New research is now unravelling the ways in which bacteria aid digestion, regulate our immune systems, eliminate toxins, produce vitamins, affect our behaviour and even combat obesity. ‘They actually help us become who we are,’ says Yong. But we are facing a growing problem.
Our obsession with hygiene, our overuse of antibiotics and our unhealthy, low-fibre diets are disrupting the bacterial balance and may be responsible for soaring rates of allergies and immune problems, such as inflammatory bowel disease (IBD).

The most recent research actually turns accepted norms upside down. For example, there are studies indicating that the excessive use of household detergents and antibacterial products actually destroys the microbes that normally keep the more dangerous germs at bay. Other studies show that keeping a dog as a pet gives children early exposure to a diverse range of bacteria, which may help protect them against allergies later.

The readers of Yong's book must be prepared for a decidedly unglamorous world. Among the less appealing case studies is one about a fungus that is wiping out entire populations of frogs and that can be halted by a rare bacterium. Another is about squid that carry luminescent bacteria that protect them against predators. However, if you can overcome your distaste for some of the investigations, the reasons for Yong's enthusiasm become clear. The microbial world is a place of wonder. Already, in an attempt to stop mosquitoes spreading dengue fever – a disease that infects 400 million people a year – mosquitoes are being loaded with a bacterium to block the disease. In the future, our ability to manipulate microbes means we could construct buildings with useful microbes built into their walls to fight off infections. Just imagine a neonatal hospital ward coated in a specially mixed cocktail of microbes so that babies get the best start in life.
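The review's "at best 50 per cent human" hedge can be checked with quick arithmetic on the cell counts it quotes (about 30 trillion human cells against about 39 trillion microbial ones); by raw cell count we come out slightly less than half human, which is a sketch of the calculation rather than anything from the book itself:

```python
# Cell counts quoted in the review (approximate estimates).
human_cells = 30e12      # ~30 trillion human cells
microbial_cells = 39e12  # ~39 trillion microbial cells
total = human_cells + microbial_cells

human_fraction = human_cells / total  # share of our cells that are human
print(f"Human cells: {human_fraction:.1%} of the total")  # ~43.5%
```

On these estimates the honest figure is about 43.5 per cent, so "only 50 per cent human" is, if anything, generous.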



Children learn to construct language from those around them. Until about the age of three, children tend to develop their language by modeling the speech of their parents, but from that time on, peers have a growing influence as models for language development. It is easy to observe that, when adults and older children interact with younger children, they tend to modify their language to improve communication; this modified language is called caretaker speech. Caretaker speech is often used quite unconsciously; few people actually study how to modify language when speaking to young children but, instead, without thinking, find ways to reduce the complexity of language in order to communicate effectively with young children. A caretaker will unconsciously speak in one way with adults and in a very different way with young children. Caretaker speech tends to be slower speech with short, simple words and sentences said in a higher-pitched voice with exaggerated inflections and many repetitions of essential information. It is not limited to what is commonly called baby talk, which generally refers to the use of simplified, repeated-syllable expressions such as ma-ma, boo-boo, bye-bye, and wa-wa, but also includes simplified sentence structures repeated in sing-song inflections. Caretaker speech serves the very important function of allowing young children to acquire language more easily. The higher-pitched voice and the exaggerated inflections tend to focus the small child on what the caretaker is saying, the simplified words and sentences make it easier for the small child to begin to comprehend, and the repetitions reinforce the child's developing understanding. Then, as a child's speech develops, caretakers tend to adjust their language in response to the improved language skills, again quite unconsciously.
Parents and older children regularly adjust their speech to a level that is slightly above that of a younger child; without consciously recognizing what they are doing, these caretakers will speak in one way to a one-year-old and in a progressively more complex way as the child reaches the age of two or three. An important point to note is that the function served by caretaker speech, that of assisting a child to acquire language in small and simple steps, is an unconsciously used but extremely important part of the process of language acquisition and as such is quite universal. Studying cultures where children do not acquire language through caretaker speech is difficult because such cultures are difficult to find. The question of why caretaker speech is universal is not clearly understood; instead, proponents on either side of the nature vs. nurture debate argue over whether caretaker speech is a natural function or a learned one. Those who believe that caretaker speech is a natural and inherent function in humans believe that it is human nature for children to acquire language and for those around them to encourage their language acquisition naturally; the presence of a child is itself a natural stimulus that increases the rate of caretaker speech among those present. In contrast, those who believe that caretaker speech develops through nurturing rather than nature argue that a person who is attempting to communicate with a child will learn, from the reactions to the communication attempts, which ways of communicating are the most effective; a parent might, for example, learn to use speech with exaggerated inflections with a small child because the exaggerated inflections do a better job of attracting the child's attention than do more subtle inflections. Whether caretaker speech results from nature or nurture, it does play an important and universal role in child language acquisition.



Coral colonies require a series of complicated events and circumstances to develop into the characteristically intricate reef structures for which they are known. These events and circumstances involve physical and chemical processes as well as delicate interactions among various animals and plants that allow coral colonies to thrive. The basic element in the development of coralline reef structures is a group of animals from the Anthozoa class, called stony corals, that is closely related to jellyfish and sea anemones. These small polyps (the individual animals that make up the coral reef), which are for the most part only a fraction of an inch in length, live in colonies made up of an immeasurable number of polyps clustered together. Each individual polyp obtains calcium from the seawater where it lives to create a skeleton around the lower part of its body, and the polyps attach themselves both to the living tissue and to the external skeletons of other polyps. Many polyps tend to retreat inside of their skeletons during hours of daylight and then stretch partially outside of their skeletons during hours of darkness to feed on minute plankton from the water around them. The mouth at the top of each body is surrounded by rings of tentacles used to grab onto food, and these rings of tentacles make the polyps look like flowers with rings of clustered petals; because of this, biologists for years thought that corals were plants rather than animals. Once these coralline structures are established, they reproduce very quickly. They build in upward and outward directions to create a fringe of living coral surrounding the skeletal remnants of once-living coral. That coralline structures are commonplace in tropical waters around the world is due to the fact that they reproduce so quickly rather than the fact that they are hardy life-forms easily able to withstand external forces of nature.
They cannot survive in water that is too dirty, and they need water that is at least 72°F (22°C) to exist, so they are formed only in waters ranging from 30° north to 30° south of the equator. They need a significant amount of sunlight, so they live only within an area between the surface of the ocean and a few meters beneath it. In addition, they require specific types of microscopic algae for their existence, and their skeletal shells are delicate in nature and are easily damaged or fragmented. They are also prey to other sea animals such as sponges and clams that bore into their skeletal structures and weaken them.

Coral colonies cannot build reef structures without considerable assistance. The many openings in and among the skeletons must be filled in and cemented together by material from around the colonies. The filling material often consists of fine sediments created either from the borings and waste of other animals around the coral or from the skeletons, shells, and remnants of dead plants and animals. The material that is used to cement the coral reefs comes from algae and other microscopic forms of seaweed. An additional part of the process of reef formation is the ongoing compaction and cementation that occurs throughout the process. Because of the soluble and delicate nature of the material from which coral is created, the relatively unstable crystals of corals and shells break down over time and are then rearranged as a more stable form of limestone. The coralline structures that are created through these complicated processes are extremely variable in form. They may, for example, be treelike and branching, or they may have more rounded and compact shapes. What they share in common, however, is the extraordinary variety of plant and animal life-forms that are a necessary part of the ongoing process of their formation.



America's passion for the automobile developed rather quickly in the beginning of the twentieth century. At the turn of that century, there were few automobiles, or horseless carriages, as they were called at the time, and those that existed were considered frivolous playthings of the rich. They were rather fragile machines that sputtered and smoked and broke down often; they were expensive toys that could not be counted on to get one where one needed to go; and only the wealthy could afford both the expensive upkeep and the inherent delays that resulted from the use of a machine that tended to break down time and again. These early automobiles required repairs so frequently both because their engineering was at an immature stage and because roads were unpaved and often in poor condition. Then, when breakdowns occurred, there were no services such as roadside gas stations or tow trucks to assist drivers needing help in their predicament. Drivers of horse-drawn carriages considered the horseless mode of transportation foolhardy, preferring instead to rely on their four-legged "engines," which they considered a tremendously more dependable and cost-effective means of getting around. Automobiles in the beginning of the twentieth century were quite unlike today's models. Many of them were electric cars, even though the electric models had quite a limited range and needed to be recharged frequently at electric charging stations; many others were powered by steam, though drivers of steam cars were often required to be certified steam engineers due to the dangers inherent in operating a steam-powered machine.
The early automobiles also lacked much emphasis on body design; in fact, they were often little more than benches on wheels, though by the end of the first decade of the century they had progressed to leather-upholstered chairs or sofas on thin wheels that absorbed little of the incessant pounding associated with the movement of these machines. In spite of the rather rough and undeveloped nature of these early horseless carriages, something about them grabbed people's imagination, and their use increased rapidly, though not always smoothly. In the first decade of the last century, roads were shared by the horse-drawn and horseless variety of carriages, a situation that was rife with problems and required strict measures to control the incidents and accidents that resulted when two such different modes of transportation were used in close proximity. New York City, for example, banned horseless vehicles from Central Park early in the century because they had been involved in so many accidents, often causing injury or death; then, in 1904, New York state felt that it was necessary to control automobile traffic by placing speed limits of 20 miles per hour in open areas, 15 miles per hour in villages, and 10 miles per hour in cities or areas of congestion. However, the measures taken were less a means of limiting use of the automobile and more a way of controlling the effects of an invention whose use increased dramatically in a relatively short period of time. Fewer than 5,000 automobiles were sold in the United States for a total of approximately $5 million in 1900, while considerably more cars, 181,000, were sold for $215 million in 1910, and by the middle of the 1920s, automobile manufacturing had become the top industry in the United States and accounted for 6 percent of the manufacturing in the country.
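The sales figures above imply something the prose leaves implicit: average prices stayed roughly flat while unit sales exploded. A quick calculation using only the passage's own numbers (approximate, since 1900 sales were "under 5,000") makes this concrete:

```python
# Sales figures quoted in the passage (1900 values are approximate).
cars_1900, revenue_1900 = 5_000, 5_000_000        # ~5,000 cars, ~$5 million
cars_1910, revenue_1910 = 181_000, 215_000_000    # 181,000 cars, $215 million

avg_price_1900 = revenue_1900 / cars_1900   # ~$1,000 per car
avg_price_1910 = revenue_1910 / cars_1910   # ~$1,188 per car
growth = cars_1910 / cars_1900              # ~36x more cars sold

print(f"Average price 1900: ${avg_price_1900:,.0f}")
print(f"Average price 1910: ${avg_price_1910:,.0f}")
print(f"Unit sales grew roughly {growth:.1f}x in a decade")
```

In other words, the decade's 43-fold revenue growth came almost entirely from volume, not from more expensive cars.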



There is still much for astronomers to learn about pulsars. Based on what is known, the term pulsar is used to describe the phenomenon of short, precisely timed radio bursts that are emitted from somewhere in space. Though not everything is known about pulsars, they are now believed to emanate from spinning neutron stars, the highly compressed cores of collapsed stars that have been theorized to exist. Pulsars were discovered in 1967, when Jocelyn Bell, a graduate student at Cambridge University, noticed an unusual pattern on a chart from a radio telescope. What made this pattern unusual was that, unlike other radio signals from celestial objects, this series of pulses had a highly regular period of 1.33730119 seconds. Because day after day the pulses came from the same place among the stars, Cambridge researchers came to the conclusion that they could not have come from a local source such as an Earth satellite. A name was needed for this newly discovered phenomenon. The possibility that the signals were coming from a distant civilization was considered, and at that point the idea of naming the phenomenon L.G.M. (short for Little Green Men) was raised. However, after researchers had found three more regularly pulsing objects in other parts of the sky over the next few weeks, the name pulsar was selected instead of L.G.M. As more and more pulsars were found, astronomers engaged in debates over their nature. It was determined that a pulsar could not be a star inasmuch as a normal star is too big to pulse so fast. The question was also raised as to whether a pulsar might be a white dwarf star, a dying star that has collapsed to approximately the size of the Earth and is slowly cooling off. However, this idea was also rejected because the fastest pulsar known at the time pulsed around thirty times per second and a white dwarf, which is the smallest known type of star, would not hold together if it were to spin that fast.
The final conclusion among astronomers was that only a neutron star, which is theorized to be the remaining core of a collapsed star that has been reduced to a highly dense radius of only around 10 kilometers, was small enough to be a pulsar. Further evidence of the link between pulsars and neutron stars was found in 1968, when a pulsar was found in the middle of the Crab Nebula. The Crab Nebula is what remains of the supernova of the year 1054, and inasmuch as it has been theorized that neutron stars sometimes remain following supernova explosions, it is believed that the pulsar coming from the Crab Nebula is just such a neutron star. The generally accepted theory for pulsars is the lighthouse theory, which is based upon a consideration of the theoretical properties of neutron stars and the observed properties of pulsars. According to the lighthouse theory, a spinning neutron star emits beams of radiation that sweep through the sky, and when one of the beams passes over the Earth, it is detectable on Earth. It is known as the lighthouse theory because the emissions from neutron stars are similar to the pulses of light emitted from lighthouses as they sweep over the ocean; the name lighthouse is therefore actually more appropriate than the name pulsar.
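The claim that a white dwarf "would not hold together" at thirty rotations per second can be sketched with the standard centrifugal break-up estimate for a uniform sphere, P_min = sqrt(3π/(Gρ)): spin faster than this and material at the equator is flung off. The densities below are rough textbook values assumed for illustration, not figures from the passage:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def min_spin_period(density):
    """Shortest rotation period (s) before centrifugal force at the
    equator exceeds self-gravity for a uniform sphere of this density."""
    return math.sqrt(3 * math.pi / (G * density))

rho_white_dwarf = 1e9    # kg/m^3, typical white dwarf (assumed value)
rho_neutron_star = 5e17  # kg/m^3, typical neutron star (assumed value)

observed_period = 1 / 30  # fastest pulsar known at the time: ~30 pulses/s

print(min_spin_period(rho_white_dwarf))   # ~12 s: far too slow
print(min_spin_period(rho_neutron_star))  # well under a millisecond
```

A white dwarf would tear itself apart long before reaching a 1/30-second period, while a neutron star can spin far faster than any pulsar then known, which is why only the neutron-star explanation survived.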



Schizophrenia is in reality a cluster of psychological disorders in which a variety of behaviors are exhibited and which are classified in various ways. Though there are numerous behaviors that might be considered schizophrenic, common behaviors that manifest themselves in severe schizophrenic disturbances are thought disorders, delusions, and emotional disorders. Because schizophrenia is not a single disease but a cluster of related disorders, schizophrenics tend to be classified into various subcategories. The various subcategories of schizophrenia are based on the degree to which the various common behaviors are manifested in the patient as well as on other factors such as the age of the patient at the onset of symptoms and the duration of the symptoms. Five of the more common subcategories of schizophrenia are simple, hebephrenic, paranoid, catatonic, and acute. The main characteristic of simple schizophrenia is that it begins at a relatively early age and manifests itself in a slow withdrawal from family and social relationships, with a gradual progression toward more severe symptoms over a period of years. Someone suffering from simple schizophrenia may early on simply be apathetic toward life, may maintain contact with reality a great deal of the time, and may be out in the world rather than hospitalized. Over time, however, the symptoms, particularly thought and emotional disorders, increase in severity. Hebephrenic schizophrenia is a relatively severe form of the disease that is characterized by severely disturbed thought processes as well as highly emotional and bizarre behavior. Those suffering from hebephrenic schizophrenia have hallucinations and delusions and appear quite incoherent; their behavior is often extreme and quite inappropriate to the situation, perhaps full of unwarranted laughter, tears, or obscenities that seem unrelated to the moment.
This type of schizophrenia represents a rather severe and ongoing disintegration of personality that makes this type of schizophrenic unable to play a role in society. Paranoid schizophrenia is a different type of schizophrenia in which the outward behavior of the schizophrenic often seems quite appropriate; this type of schizophrenic is often able to get along in society for long periods of time. However, a paranoid schizophrenic suffers from extreme delusions of persecution, often accompanied by delusions of grandeur. While this type of schizophrenic has strange delusions and unusual thought processes, his or her outward behavior is not as incoherent or unusual as a hebephrenic's behavior. A paranoid schizophrenic can appear alert and intelligent much of the time but can also turn suddenly hostile and violent in response to imagined threats. Another type of schizophrenia is the catatonic variety, which is characterized by alternating periods of extreme excitement and stupor. There are abrupt changes in behavior, from frenzied periods of excitement to stuporous periods of withdrawn behavior. During periods of excitement, the catatonic schizophrenic may exhibit excessive and sometimes violent behavior; during the periods of stupor, the catatonic schizophrenic may remain mute and unresponsive to the environment. A final type of schizophrenia is acute schizophrenia, which is characterized by a sudden onset of schizophrenic symptoms such as confusion, excitement, emotionality, depression, and irrational fear. The acute schizophrenic, unlike the simple schizophrenic, shows a sudden onset of the disease rather than a slow progression from one stage of it to another. Additionally, the acute schizophrenic exhibits various types of schizophrenic behaviors during different episodes, sometimes exhibiting the characteristics of hebephrenic, catatonic, or even paranoid schizophrenia.
In this type of schizophrenia, the patient's personality seems to have completely disintegrated.



In a theoretical model of decision making, a decision is defined as the process of selecting one option from among a group of options for implementation. Decisions are formed by a decision maker, the one who actually chooses the final option, in conjunction with a decision unit, all of those in the organization around the decision maker who take part in the process. In this theoretical model, the members of the decision unit react to an unidentified problem by studying the problem, determining the objectives of the organization, formulating options, evaluating the strengths and weaknesses of each of the options, and reaching a conclusion. Many different factors can have an effect on the decision, including the nature of the problem itself, external forces exerting an influence on the organization, the internal dynamics of the decision unit, and the personality of the decision maker. During recent years, decision making has been studied systematically by drawing from such diverse areas of study as psychology, sociology, business, government, history, mathematics, and statistics. Analyses of decisions often emphasize one of three principal conceptual perspectives (though often the approach that is actually employed is somewhat eclectic). In the oldest of the three approaches, decisions are made by a rational actor, who makes a particular decision directly and purposefully in response to a specific threat from the external environment. It is assumed that this rational actor has clear objectives in mind, develops numerous reasonable options, considers the advantages and disadvantages of each option carefully, chooses the best option after careful analysis, and then proceeds to implement it fully. A variation of the rational actor model is a decision maker who is a satisfier, one who selects the first satisfactory option rather than continuing the decision-making process until the optimal decision has been reached.
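The contrast between the rational actor and the satisfier can be sketched as a toy algorithm. Everything here (the option names, the scores, the threshold, the function names) is illustrative, not part of any formal model in the passage:

```python
def optimize(options, score):
    """Rational actor: evaluate every option, then take the best one."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Satisfier: take the first option that is 'good enough'."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # no option met the threshold

# Hypothetical options with quality scores in [0, 1].
options = ["plan_a", "plan_b", "plan_c"]
scores = {"plan_a": 0.6, "plan_b": 0.9, "plan_c": 0.7}

print(optimize(options, scores.get))        # plan_b (best overall)
print(satisfice(options, scores.get, 0.5))  # plan_a (first acceptable)
```

The satisfier stops searching as soon as the threshold is met, so it may pick an inferior option but spends far less effort, which is exactly the trade-off the rational-actor variation describes.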
A second perspective places an emphasis on the impact of routines on decisions within organizations. It demonstrates how organizational structures and routines such as standard operating procedures tend to limit the decision-making process in a variety of ways, perhaps by restricting the information available to the decision unit, by restricting the breadth of options among which the decision unit may choose, or by inhibiting the ability of the organization to implement the decision quickly and effectively once it has been taken. Pre-planned routines and standard operating procedures are essential to coordinate the efforts of large numbers of people in massive organizations. However, these same routines and procedures can also have an inhibiting effect on the ability of the organization to arrive at optimal decisions and implement them efficiently. In this sort of decision-making process, organizations tend to take not the optimal decision but the decision that best fits within the permitted operating parameters outlined by the organization. A third conceptual perspective emphasizes the internal dynamics of the decision unit and the extent to which decisions are based on political forces within the organization. This perspective demonstrates how bargaining among individuals who have different interests and motives and varying levels of power in the decision unit leads to an eventual compromise that is not the preferred choice of any of the members of the decision unit. Each of these three perspectives on the decision-making process demonstrates a different point of view on decision making, a different lens through which the decision-making process can be observed.
It is safe to say that decision making in most organizations shows marked influences from each perspective; i.e., an organization strives to get as close as possible to the rational model in its decisions, yet the internal routines and dynamics of the organization come into play in the decision.



Millions of different species exist on the earth. These millions of species, which have evolved over billions of years, are the result of two distinct but simultaneously occurring processes: the processes of speciation and extinction. One of the processes that affects the number of species on earth is speciation, which results when one species diverges into two distinct species as a result of disparate natural selection in separate environments. Geographic isolation is one common mechanism that fosters speciation; speciation as a result of geographic isolation occurs when two populations of a species become separated for long periods of time into areas with different environmental conditions. After the two populations are separated, they evolve independently; if this divergence continues long enough, members of the two distinct populations eventually become so different genetically that they are two distinct species rather than one. The process of speciation may occur within hundreds of years for organisms that reproduce rapidly, but for most species the process of speciation can take thousands to millions of years. One example of speciation is the early fox, which over time evolved into two distinct species, the gray fox and the arctic fox. The early fox separated into populations which evolved differently in response to very different environments as the populations moved in different directions, one to colder northern climates and the other to warmer southern climates. The northern population adapted to cold weather by developing heavier fur; shorter ears, noses, and legs; and white fur to camouflage itself in the snow. The southern population adapted to warmer weather by developing lighter fur and longer ears, noses, and legs and keeping its darker fur for better camouflage protection. Another of the processes that affects the number of species on earth is extinction, which refers to the situation in which a species ceases to exist.
When environmental conditions change, a species needs to adapt to the new environmental conditions, or it may become extinct. Extinction of a species is not a rare occurrence but is instead a rather commonplace one: it has, in fact, been estimated that more than 99 percent of the species that have ever existed have become extinct. Extinction may occur when a species fails to adapt to evolving environmental conditions in a limited area, a process known as background extinction. In contrast, a broader and more abrupt extinction, known as mass extinction, may come about as a result of a catastrophic event or global climatic change. When such a catastrophic event or global climatic change occurs, some species are able to adapt to the new environment, while those that are unable to adapt become extinct. From geological and fossil evidence, it appears that at least five great mass extinctions have occurred; the last mass extinction occurred approximately 65 million years ago, when the dinosaurs became extinct after 140 million years of existence on earth, marking the end of the Mesozoic Era and the beginning of the Cenozoic Era. The fact that millions of species are in existence today is evidence that speciation has clearly kept well ahead of extinction. In spite of the fact that there have been numerous periods of mass extinction, there is clear evidence that periods of mass extinction have been followed by periods of dramatic increases in new species to fill the void created by the mass extinctions, though it may take 10 million years or more following a mass extinction for biological diversity to be rebuilt through speciation. When the dinosaurs disappeared 65 million years ago, for example, the evolution and speciation of mammals increased spectacularly over the millions of years that ensued.



In the late 1980s, a disaster involving the Exxon Valdez, an oil tanker tasked with transporting oil from southern Alaska to the West Coast of the United States, caused a considerable amount of damage to the environment of Alaska. Crude oil from Alaska's North Slope fields near Prudhoe Bay on the north coast of Alaska is carried by pipeline to the port of Valdez on the southern coast and from there is shipped by tanker to the West Coast. On March 24, 1989, the Exxon Valdez, a huge oil tanker more than three football fields in length, went off course in a 16-kilometer-wide channel in Prince William Sound near Valdez, Alaska, hitting submerged rocks and causing a tremendous oil spill. The resulting oil slick spread rapidly and coated more than 1,600 kilometers (1,000 miles) of coastline. Though actual numbers can never be known, it is believed that at least a half million birds, thousands of seals and otters, quite a few whales, and an untold number of fish were killed as a result. Decades before this disaster, environmentalists had predicted just such an enormous oil spill in this area because of the treacherous nature of the waters due to the submerged reefs, icebergs, and violent storms there. They had urged that oil be transported to the continental United States by land-based pipeline rather than by oil tanker or by undersea pipeline to reduce the potential damage to the environment posed by the threat of an oil spill. Alyeska, a consortium of the seven oil companies working in Alaska's North Slope fields, argued against such a land-based pipeline on the basis of the length of time that such a pipeline would take to construct and on the belief, or perhaps wishful thinking, that the probability of a tanker spill in the area was extremely low.
Government agencies charged with protecting the environment were assured by Alyeska and Exxon that such a pipeline was unnecessary because appropriate protective measures had been taken and that, within five hours of any accident, there would be enough equipment and trained workers to clean up any spill before it managed to cause much damage. However, when the Exxon Valdez spill actually occurred, Exxon and Alyeska were unprepared, in terms of both equipment and personnel, to deal with the spill. Though it was a massive spill, appropriate personnel and equipment available in a timely fashion could have reduced the damage considerably. Exxon ended up spending billions of dollars on the clean-up itself and, in addition, spent further billions in fines and damages to the state of Alaska, the federal government, commercial fishermen, property owners, and others harmed by the disaster. The total cost to Exxon was more than $8 billion. A step that could possibly have prevented this accident even though the tanker did run into submerged rocks would have been a double hull on the tanker. Today, almost all merchant ships have double hulls, but only a small percentage of oil tankers do. Legislation passed since the spill requires all new tankers to be built with double hulls, but many older tankers have received dispensations to avoid the $25 million cost per tanker to convert a single-hulled tanker to one with a double hull. However, compared with the $8.5 billion cost of the Exxon Valdez catastrophe, that is a paltry sum.
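The closing comparison is easy to make concrete with the passage's own figures ($25 million per double-hull retrofit against roughly $8.5 billion in total losses from the spill); this is just arithmetic on those two numbers, not data beyond what the passage states:

```python
retrofit_cost = 25_000_000       # double-hull conversion cost, per tanker
disaster_cost = 8_500_000_000    # approximate total cost of the spill

# Number of single-hulled tankers that could have been converted
# for the price of this one disaster.
tankers_convertible = disaster_cost // retrofit_cost
fraction = retrofit_cost / disaster_cost

print(tankers_convertible)  # 340 tankers
print(f"{fraction:.2%}")    # one retrofit is ~0.29% of the spill's cost
```

Put differently, the cost of the catastrophe would have paid for double hulls on a fleet of 340 tankers.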



A considerable body of research has demonstrated a correlation between birth order and aspects such as temperament and behavior, and some psychologists believe that birth order significantly affects the development of personality. Psychologist Alfred Adler was a pioneer in the study of the relationship between birth order and personality. A key point in his research and in the hypothesis that he developed based on it was that it was not the actual numerical birth position that affected personality; instead, it was the similar responses in large numbers of families to children in specific birth order positions that had an effect. For example, first-borns, who have their parents to themselves initially and do not have to deal with siblings in the first part of their lives, tend to have their first socialization experiences with adults and therefore tend to find the process of peer socialization more difficult. In contrast, later-born children have to deal with siblings from the first moment of their lives and therefore tend to have stronger socialization skills. Numerous studies since Adler's have been conducted on the effect of birth order on personality. These studies have tended to classify birth order types into four different categories: first-born, second-born and/or middle, last, and only child. Studies have consistently shown that first-born children tend to exhibit similar positive and negative personality traits. First-borns have consistently been linked with academic achievement in various studies; in one study, the number of National Merit scholarship winners who are first-borns was found to be equal to the number of second- and third-borns combined. First-borns have been found to be more responsible and assertive than those born in other birth-order positions and tend to rise to positions of leadership more often than others; more first-borns have served in the U.S. Congress and as U.S. presidents than have those born in other birth-order positions.
However, studies have shown that first-borns tend to be more subject to stress and are considered problem children more often than later-borns. Second-born and/or middle children demonstrate markedly different tendencies from first-borns. They tend to feel inferior to the older child or children because it is difficult for them to comprehend that their lower level of achievement is a function of age rather than ability, and they often try to succeed in areas other than those in which their older sibling or siblings excel. They tend to be more trusting, accepting, and focused on others than the more self-centered first-borns, and they tend to have a comparatively higher level of success in team sports than do first-borns or only children, who more often excel in individual sports. The last-born child is the one who tends to be the eternal baby of the family and thus often exhibits a strong sense of security. Last-borns collectively achieve the highest degree of social success and demonstrate the highest levels of self-esteem of all the birth-order positions. They often exhibit less competitiveness than older brothers and sisters and are more likely to take part in less competitive group games or in social organizations such as sororities and fraternities. Only children tend to exhibit some of the main characteristics of first-borns and some of the characteristics of last-borns. Only children tend to exhibit the strong sense of security and self-esteem exhibited by last-borns while, like first-borns, they are more achievement oriented and more likely than middle- or last-borns to achieve academic success. However, only children tend to have the most problems establishing close relationships and exhibit a lower need for affiliation than other children.


    






Aggressive behavior is any behavior that is intended to cause injury, pain, suffering, damage, or destruction. While aggressive behavior is often thought of as purely physical, verbal attacks such as screaming and shouting or belittling and humiliating comments aimed at causing harm and suffering can also be a type of aggression. What is key to the definition of aggression is that whenever harm is inflicted, be it physical or verbal, it is intentional. 

          Questions about the causes of aggression have long been of concern to both social and biological scientists. Theories about the causes of aggression cover a broad spectrum, ranging from those with biological or instinctive emphases to those that portray aggression as a learned behavior.

          Numerous theories are based on the idea that aggression is an inherent and natural human instinct. Aggression has been explained as an instinct that is directed externally toward others in a process called displacement, and it has been noted that aggressive impulses that are not channeled toward a specific person or group may be expressed indirectly through socially acceptable activities such as sports and competition in a process called catharsis. Biological, or instinctive, theories of aggression have also been put forth by ethologists, who study the behavior of animals in their natural environments. A number of ethologists have, based upon their observations of animals, supported the view that aggression is an innate instinct common to humans. 

          Two different schools of thought exist among those who view aggression as instinct. One group holds the view that aggression can build up spontaneously, with or without outside provocation, and that violent behavior will thus result even from little or no provocation. Another suggests that aggression is indeed an instinctive response but that, rather than occurring spontaneously and without provocation, it is a direct response to provocation from an outside source. 

         In contrast to instinct theories, social learning theories view aggression as a learned behavior. This approach focuses on the effect that role models and reinforcement of behavior have on the acquisition of aggressive behavior. Research has shown that aggressive behavior can be learned through a combination of modeling and positive reinforcement of the aggressive behavior and that children are influenced by the combined forces of observing aggressive behavior in parents, peers, or fictional role models and of noting either positive reinforcement for the aggressive behavior or, minimally, a lack of negative reinforcement for the behavior. While research has provided evidence that the behavior of a live model is more influential than that of a fictional model, fictional models of aggressive behavior, such as those seen in movies and on television, do still have an impact on behavior. On-screen deaths or acts of violent behavior in certain television programs or movies can be counted in the tens, or hundreds, or even thousands; while some have argued that this sort of fictional violence does not in and of itself cause violence and may even have a beneficial cathartic effect, studies have shown correlations between viewing of violence and incidences of aggressive behavior in both childhood and adolescence. Studies have also shown that it is not just the modeling of aggressive behavior in either its real-life or fictional form that correlates with increased acts of violence in youths; a critical factor in increasing aggressive behaviors is the reinforcement of the behavior. If the aggressive role model is rewarded rather than punished for violent behavior, that behavior is more likely to be seen as positive and is thus more likely to be imitated.  







The fossil remains of the first flying vertebrates, the pterosaurs, have intrigued paleontologists for more than two centuries. How such large creatures, which weighed in some cases as much as a piloted hang-glider and had wingspans from 8 to 12 meters, solved the problems of powered flight, and exactly what these creatures were--reptiles or birds--are among the questions scientists have puzzled over. Perhaps the least controversial assertion about the pterosaurs is that they were reptiles. Their skulls, pelvises, and hind feet are reptilian. The anatomy of their wings suggests that they did not evolve into the class of birds. In pterosaurs a greatly elongated fourth finger of each forelimb supported a winglike membrane. The other fingers were short and reptilian, with sharp claws. In birds the second finger is the principal strut of the wing, which consists primarily of feathers. If the pterosaurs walked on all fours, the three short fingers may have been employed for grasping. When a pterosaur walked or remained stationary, the fourth finger, and with it the wing, could only turn upward in an extended inverted V-shape along each side of the animal's body. The pterosaurs resembled both birds and bats in their overall structure and proportions. This is not surprising because the design of any flying vertebrate is subject to aerodynamic constraints. Both the pterosaurs and the birds have hollow bones, a feature that represents a savings in weight. In the birds, however, these bones are reinforced more massively by internal struts. Although scales typically cover reptiles, the pterosaurs probably had hairy coats. T.H. Huxley reasoned that flying vertebrates must have been warm-blooded because flying implies a high rate of metabolism, which in turn implies a high internal temperature. Huxley speculated that a coat of hair would insulate against loss of body heat and might streamline the body to reduce drag in flight. 
The recent discovery of a pterosaur specimen covered in long, dense, and relatively thick hairlike fossil material was the first clear evidence that his reasoning was correct. Efforts to explain how the pterosaurs became airborne have led to suggestions that they launched themselves by jumping from cliffs, by dropping from trees, or even by rising into light winds from the crests of waves. Each hypothesis has its difficulties. The first wrongly assumes that the pterosaurs' hind feet resembled a bat's and could serve as hooks by which the animal could hang in preparation for flight. The second hypothesis seems unlikely because large pterosaurs could not have landed in trees without damaging their wings. The third calls for high waves to channel updrafts. The wind that made such waves, however, might have been too strong for the pterosaurs to control their flight once airborne.


    


Literature is at once the most intimate and the most articulate of the arts. It cannot impart its effect through the senses or the nerves as the other arts can; it is beautiful only through the intelligence; it is the mind speaking to the mind; until it has been put into absolute terms, of an invariable significance, it does not exist at all. It cannot awaken this emotion in one, and that in another; if it fails to express precisely the meaning of the author, if it does not say him, it says nothing, and is nothing. So that when a poet has put his heart, much or little, into a poem, and sold it to a magazine, the scandal is greater than when a painter has sold a picture to a patron, or a sculptor has modelled a statue to order. These are artists less articulate and less intimate than the poet; they are more exterior to their work; they are less personally in it; they part with less of themselves in the dicker. It does not change the nature of the case to say that Tennyson and Longfellow and Emerson sold the poems in which they couched the most mystical messages their genius was charged to bear mankind. They submitted to the conditions which none can escape; but that does not justify the conditions, which are none the less the conditions of hucksters because they are imposed upon poets. If it will serve to make my meaning a little clearer, we will suppose that a poet has been crossed in love, or has suffered some real sorrow, like the loss of a wife or child. He pours out his broken heart in verse that shall bring tears of sacred sympathy from his readers, and an editor pays him a hundred dollars for the right of bringing his verse to their notice. It is perfectly true that the poem was not written for these dollars, but it is perfectly true that it was sold for them. The poet must use his emotions to pay his provision bills; he has no other means; society does not propose to pay his bills for him. 
Yet, and at the end of the ends, the unsophisticated witness finds the transaction ridiculous, finds it repulsive, finds it shabby. Somehow he knows that if our huckstering civilization did not at every moment violate the eternal fitness of things, the poet's song would have been given to the world, and the poet would have been cared for by the whole human brotherhood, as any man should be who does the duty that every man owes it. The instinctive sense of the dishonor which money-purchase does to art is so strong that sometimes a man of letters who can pay his way otherwise refuses pay for his work, as Lord Byron did, for a while, from a noble pride, and as Count Tolstoy has tried to do, from a noble conscience. But Byron's publisher profited by a generosity which did not reach his readers; and the Countess Tolstoy collects the copyright which her husband foregoes; so that these two eminent instances of protest against business in literature may be said not to have shaken its money basis. I know of no others; but there may be many that I am culpably ignorant of. Still, I doubt if there are enough to affect the fact that Literature is Business as well as Art, and almost as soon. At present business is the only human solidarity; we are all bound together with that chain, whatever interests and tastes and principles separate us.


No very satisfactory account of the mechanism that caused the formation of the ocean basins has yet been given. The traditional view supposes that the upper mantle of the earth behaves as a liquid when it is subjected to small forces for long periods and that differences in temperature under oceans and continents are sufficient to produce convection in the mantle of the earth with rising convection currents under the midocean ridges and sinking currents under the continents. Theoretically, this convection would carry the continental plates along as though they were on a conveyor belt and would provide the forces needed to produce the split that occurs along the ridge. This view may be correct: it has the advantage that the currents are driven by temperature differences that themselves depend on the position of the continents. Such a back-coupling, in which the position of the moving plate has an impact on the forces that move it, could produce complicated and varying motions. On the other hand, the theory is implausible because convection does not normally occur along lines, and it certainly does not occur along lines broken by frequent offsets or changes in direction, as the ridge is. Also, it is difficult to see how the theory applies to the plate between the Mid-Atlantic Ridge and the ridge in the Indian Ocean. This plate is growing on both sides, and since there is no intermediate trench, the two ridges must be moving apart. It would be odd if the rising convection currents kept exact pace with them. An alternative theory is that the sinking part of the plate, which is denser than the hotter surrounding mantle, pulls the rest of the plate after it. Again it is difficult to see how this applies to the ridge in the South Atlantic, where neither the African nor the American plate has a sinking part. Another possibility is that the sinking plate cools the neighboring mantle and produces convection currents that move the plates. 
This last theory is attractive because it gives some hope of explaining the enclosed seas, such as the Sea of Japan. These seas have a typical oceanic floor, except that the floor is overlaid by several kilometers of sediment. Their floors have probably been sinking for long periods. It seems possible that a sinking current of cooled mantle material on the upper side of the plate might be the cause of such deep basins. The enclosed seas are an important feature of the earth's surface, and they seriously require explanation because, in addition to the enclosed seas that are developing at present behind island arcs, there are a number of older ones of possibly similar origin, such as the Gulf of Mexico, the Black Sea, and perhaps the North Sea.






     The first and decisive step in the expansion of Europe overseas was the conquest of the Atlantic Ocean. That the nation to achieve this should be Portugal was the logical outcome of her geographical position and her history. Placed on the extreme margin of the old, classical Mediterranean world and facing the untraversed ocean, Portugal could adapt and develop the knowledge and experience of the past to meet the challenge of the unknown. Some centuries of navigating the coastal waters of Western Europe and Northern Africa had prepared Portuguese seamen to appreciate the problems which the Ocean presented and to apply and develop the methods necessary to overcome them. From the seamen of the Mediterranean, particularly those of Genoa and Venice, they had learned the organization and conduct of a mercantile marine, and from Jewish astronomers and Catalan mapmakers the rudiments of navigation. Largely excluded from a share in Mediterranean commerce at a time when her increasing and vigorous population was making heavy demands on her resources, Portugal turned southwards and westwards for opportunities of trade and commerce. At this moment of national destiny it was fortunate for her that in men of the calibre of Prince Henry, known as the Navigator, and King John II she found resolute and dedicated leaders. The problems to be faced were new and complex. The conditions for navigation and commerce in the Mediterranean were relatively simple, compared with those in the western seas. The landlocked Mediterranean, tideless and with a climatic regime of regular and well-defined seasons, presented few obstacles to sailors who were the heirs of a great body of sea lore garnered from the experiences of many centuries. What hazards there were, in the form of sudden storms or dangerous coasts, were known and could usually be anticipated. 
Similarly the Mediterranean coasts, though they might be for long periods in the hands of dangerous rivals, were described in sailing directions or laid down on the portolan charts drawn by Venetian, Genoese and Catalan cartographers. Problems of determining positions at sea, which confronted the Portuguese, did not arise. Though the Mediterranean seamen by no means restricted themselves to coastal sailing, the latitudinal extent of the Mediterranean was not great, and voyages could be conducted from point to point on compass bearings; the ships were never so far from land as to make it necessary to fix their positions in latitude by astronomical observations. Having made a landfall on a bearing, they could determine their precise position from prominent landmarks, soundings or the nature of the sea bed, after reference to the sailing directions or charts.     By contrast, the pioneers of ocean navigation faced much greater difficulties. The western ocean which extended, according to the speculations of the cosmographers, through many degrees of latitude and longitude, was an unknown quantity, but certainly subjected to wide variations of weather and without known bounds. Those who first ventured out over its waters did so without benefit of sailing directions or traditional lore. As the Portuguese sailed southwards, they left behind them the familiar constellations in the heavens by which they could determine direction and the hours of the night, and particularly the pole-star from which by a simple operation they could determine their latitude. Along the unknown coasts they were threatened by shallows, hidden banks, rocks and contrary winds and currents, with no knowledge of convenient shelter to ride out storms or of very necessary watering places. It is little wonder that these pioneers dreaded the thought of being forced on to a lee shore or of having to choose between these inshore dangers and the unrecorded perils of the open sea.



     In the past, American colleges and universities were created to serve a dual purpose--to advance learning and to offer a chance to become familiar with bodies of knowledge already discovered to those who wished it. To create and to impart, these were the hallmarks of American higher education prior to the most recent, tumultuous decades of the twentieth century. The successful institution of higher learning had never been one whose mission could be defined in terms of providing vocational skills or as a strategy for resolving societal problems. In a subtle way Americans believed postsecondary education to be useful, but not necessarily of immediate use. What the student obtained in college became beneficial in later life--residually, without direct application in the period after graduation.     Another purpose has now been assigned to the mission of American colleges and universities. Institutions of higher learning--public or private--commonly face the challenge of defining their programs in such a way as to contribute to the service of the community.     This service role has various applications. Most common are programs to meet the demands of regional employment markets, to provide opportunities for upward social and economic mobility, to achieve racial, ethnic, or social integration, or more generally to produce "productive" as compared to "educated" graduates. Regardless of its precise definition, the idea of a service-university has won acceptance within the academic community.     One need only be reminded of the change in language describing the two-year college to appreciate the new value currently being attached to the concept of a service-related university. The traditional two-year college has shed its pejorative "junior" college label and is generally called a "community" college, a clearly value-laden expression representing the latest commitment in higher education. 
Even the doctoral degree, long recognized as a required "union card" in the academic world, has come under severe criticism as the pursuit of learning for its own sake and the accumulation of knowledge without immediate application to a professor's classroom duties. The idea of a college or university that performs a triple function--communicating knowledge to students, expanding the content of various disciplines, and interacting in a direct relationship with society--has been the most important change in higher education in recent years.     This novel development is often overlooked. Educators have always been familiar with those parts of the two-year college curriculum that have a "service" or vocational orientation. Knowing this, otherwise perceptive commentaries on American postsecondary education underplay the impact of the attempt of colleges and universities to relate to, if not resolve, the problems of society. Whether the subject under review is student unrest, faculty tenure, the nature of the curriculum, the onset of collective bargaining, or the growth of collegiate bureaucracies, in each instance the thrust of these discussions obscures the larger meaning of the emergence of the service-university in American higher education. Even the highly regarded critique of Clark Kerr, currently head of the Carnegie Foundation, which set the parameters of academic debate around the evolution of the so-called "multiversity," failed to take account of this phenomenon and the manner in which its fulfillment changed the scope of higher education. To the extent that the idea of "multiversity" centered on matters of scale--how big is too big? how complex is too complex?--it obscured the fundamental question posed by the service-university: what is higher education supposed to do? 
Unless the commitment to what Samuel Gould has properly called the "communiversity" is clearly articulated, the success of any college or university in achieving its service-education functions will be effectively impaired. . . .     The most reliable report about the progress of Open Admissions became available at the end of August, 1974. What the document showed was that the dropout rate for all freshmen admitted in September, 1970, after seven semesters, was about 48 percent, a figure that corresponds closely to national averages at similar colleges and universities. The discrepancy between the performance of "regular" students (those who would have been admitted into the four-year colleges with 80% high school averages and into the two-year units with 75%) and Open Admissions freshmen provides a better indication of how the program worked. Taken together the attrition rate (from known and unknown causes) was 48 percent, but the figure for regular students was 36 percent while for Open Admissions categories it was 56 percent. Surprisingly, the statistics indicated that the four-year colleges retained or graduated more of the Open Admissions students than the two-year colleges, a finding that did not reflect experience elsewhere. Not surprisingly, perhaps, the figures indicated a close relationship between academic success defined as retention or graduation and high school averages. Similarly, it took longer for the Open Admissions students to generate college credits and graduate than regular students, a pattern similar to national averages. The most important statistics, however, relate to the findings regarding Open Admissions students, and these indicated as a projection that perhaps as many as 70 percent would not graduate from a unit of the City University.
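The three attrition figures quoted above also imply how the freshman cohort was split between the two groups, since the overall rate must be a weighted average of the group rates. A minimal sketch of that check (the cohort shares are an inference from the passage's numbers, not a figure reported in the document):

```python
# Weighted-average check on the Open Admissions attrition figures.
# Overall attrition was 48%, regular students 36%, Open Admissions 56%.
# If f is the (inferred, not reported) fraction of the cohort admitted
# under Open Admissions, then 0.36*(1-f) + 0.56*f = 0.48.

regular, open_adm, overall = 0.36, 0.56, 0.48

# Solve overall = regular*(1-f) + open_adm*f for f
f = (overall - regular) / (open_adm - regular)

print(f"Implied Open Admissions share of the cohort: {f:.0%}")  # 60%
```

With these numbers the implied share comes out to 60 percent Open Admissions freshmen, which is consistent with the overall rate: 0.36 x 0.4 + 0.56 x 0.6 = 0.48.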


    "The United States seems totally indifferent to our problems," charges French Foreign Minister Claude Cheysson, defending his Government's decision to defy President Reagan and proceed with construction of the Soviet gas pipeline. West German Chancellor Helmut Schmidt endorsed the French action and sounded a similar note. Washington's handling of the pipeline, he said, has "casta shadow over relations" between Europe and the United States," damaging confidence as regards future agreements.'' But it's not just the pipeline that has made a mockery of Versailles. Charges of unfair trade practices and threats of retaliation in a half-dozen industries are flying back and forth over the Atlantic-and the Pacific, too|min a worrisome crescendo. Businessmen, dismayed by the long siege of sluggish economic growth that has left some 30 million people in the West unemployed, are doing what comes naturally: pressuring politicians to restrain imports, subsidize exports, or both. Steelmakers in Bonn and Pittsburgh want help; so do auto makers in London and Detroit, textile, apparel and shoe manufacturers throughout the West and farmers virtually everywhere.     Democratic governments, the targets of such pressure, are worried about their own political fortunes and embarrassed by their failure to generate strong growth and lower unemployment. The temptation is strong to take the path of least resistance and tighten up on trade-even for a Government as devoted to the free market as Ronald Reagan's. In the past 18 months, Washington, beset by domestic producers, has raised new barriers against imports in autos, textiles and sugar. Steel is likely to be next. Nor is the United States alone. European countries, to varying degrees, have also sought to defend domestic markets or to promote exports through generous subsidies. . . .  The upcoming meeting, to consider trade policy for the 1980's, is surely well timed. 
"It has been suggested often that world trade policy is 'at a crossroads'|mbut such a characterization of the early 1980's may be reasonably accurate," says C. Fred Bergsten, a former Treasury official in the Carter Administration, now director of a new Washington think tank, the Institute for International Economics.     The most urgent question before the leaders of the industrial world is whether they can change the fractious atmosphere of this summer before stronger protective measures are actually put in place. So far, Mr. Bergsten says, words have outweighed deeds. The trade picture is dismal. World trade reached some $2 trillion a year in 1980 and hasn't budged since .In the first half of this year, Mr. Bergsten suspects that trade probably fell as the world economy stayed flat. But, according to his studies, increased protectionism is not the culprit for the slowdown in trade|mat least not yet. The culprit instead is slow growth and recession, and the resulting slump in demand for imports. . . .   But there are fresh problems today that could be severely damaging. Though tariffs and outright quotas are low after three rounds of intense international trade negotiations in the past two decades |mnew trade restraints, often bound up in voluntary agreements between countries to limit particular imports, have sprouted in recent years like mushrooms in a wet wood. Though the new protectionism is more subtle than the old-fashioned variety, it is no less damaging to economic efficiency and, ultimately, to prospects for world economic growth.     A striking feature is that the new protectionism has focused on the same limited sectors in most of the major industrial countries |mtextiles, steel, electronics, footwear, shipbuilding and autos. Similarly, it has concentrated on supply from Japan and the newly industrialized countries.     When several countries try to protect the same industries, the dealings become difficult. Take steel. 
Since 1977, the European Economic Community has been following a plan to eliminate excess steel capacity, using bilateral import quotas along the way to soften the blow to the steelworkers. The United States, responding to similar pressure at home and to the same problem of a world oversupplied with steel, introduced a "voluntary" quota system in 1969, and, after a brief period of no restraint, developed a complex trigger price mechanism in 1978.


Each spring vast flocks of songbirds migrate north from Mexico to the United States, but since the 1960s their numbers have fallen by up to 50 percent. Frog populations around the world have declined in recent years. The awe-inspiring California condor survives today only because of breeding programs in zoos. Indeed, plant and animal species are disappearing from the earth at an alarming rate, and many scientists believe that human activity is largely responsible. Biodiversity, or the biological variety that thrives in a healthy ecosystem, became the focus of intense international concern during the 1990s. Harvard University biologist Edward O. Wilson, one of the leading authorities on biodiversity, estimates that if present trends continue, the world could lose 20 percent of all existing species by the year 2020. Biodiversity has become such a vogue word that academics have begun to take surveys of scientists to find out what they mean by it. For Adrian Forsyth, director of conservation biology for Conservation International, biodiversity is the totality of biological diversity from the molecular level to the ecosystem level. That includes the distinct species of all living things on Earth. Scientists have identified 1.4 million species, but no one knows how many actually exist, especially in hard-to-reach areas such as the deep heart of a rain forest or the bottom of an ocean. Biologists believe there may be 5 million to 10 million species, though some estimates run as high as 100 million. Habitat destruction as a result of people's use or development of land is considered the leading threat to biodiversity. For example, habitat loss is thought to be causing severe drops in the populations of migratory songbirds in North America, perhaps as much as 50 percent since the 1960s. 
Scientists studying songbirds that migrate from warm winter quarters in the southern United States, Mexico, and Central America to summer nesting grounds in the northern United States and Canada have found that the birds are losing habitat at both ends of their long journey. In the tropics forests are being cleared for agriculture, and in the north they are being cut down for roads, shopping centers, and housing subdivisions. As a result, bird censuses in the United States have shown a 33 percent decline in the population of rose-breasted grosbeaks since 1980. Another cause of the decline in biodiversity is the introduction of new species. Sometimes a new species is brought to an area intentionally, but sometimes it happens accidentally. In Illinois the native mussel populations in the Illinois River have fallen drastically since the 1993 summer flooding washed large numbers of zebra mussels into the river from Lake Michigan. Zebra mussels, native to the Caspian Sea, were inadvertently introduced to the Great Lakes, probably in the mid-1980s, by oceangoing cargo ships. Pollution is yet another threat to plants and animals. The St. Lawrence River, one habitat of the endangered beluga whale, drains the Great Lakes, historically one of the most industrialized regions in the world. The whales now have such high levels of toxic chemicals stored in their bodies that technically they qualify as hazardous waste under Canadian law. The effects of pollution can be very subtle and hard to prove because often the toxins do not kill animals outright but instead impair their natural defenses against disease or their ability to reproduce. Habitat loss is thought to be one reason for the decline in frog populations worldwide, because frogs live in wetlands, many of which have been filled in over the years for agriculture and development. 
But researchers theorize that another possible cause is increased exposure to ultraviolet radiation from the Sun as a result of the thinning of the atmosphere's ozone layer; the increased dose of ultraviolet radiation may be suppressing the frogs' immune systems, making them more vulnerable to a wide range of diseases. Of all the causes of species extinction and habitat loss, the one that seems to be at the heart of the matter is the size of the population of just one species, Homo sapiens. In 1994 the world population was estimated at more than 5.6 billion, more than double the number in 1950. With a larger population come increased demands for food, clothing, housing, and energy, all of which will likely lead to greater habitat destruction, more pollution, and less biological diversity. The number of people in the world continues to grow, but there is evidence that the population of the industrialized nations has more or less stabilized. That's important because although the population of these countries makes up only 25 percent of the world total, the developed world consumes 75 percent of the world's resources. The United Nations is treating the increase in the world's population as a serious matter. A 1994 UN-sponsored conference on population produced a 113-page plan to stabilize the number of people in the world at 7.27 billion by 2015. Otherwise, the UN feared, world population could mushroom to 12.5 billion by 2050.
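The population arithmetic in this passage can be sanity-checked with a constant-rate growth model. This is a deliberate oversimplification (real demographic projections use age-structured models, and the 1950 figure here is just half the 1994 figure, per the passage's "more than double" statement), but it is enough to show that the UN's 12.5 billion warning for 2050 sits in the range implied by the 1950-1994 doubling:

```python
# Rough exponential-growth check on the population figures in the passage.
# 1950 population is approximated as half of the 1994 figure of 5.6 billion.

p1950, p1994 = 2.8, 5.6  # billions

# Implied constant annual growth rate over the 44 years from 1950 to 1994
r = (p1994 / p1950) ** (1 / (1994 - 1950)) - 1
print(f"Implied annual growth rate: {r:.2%}")  # about 1.59%

# Projecting the same rate forward, unchanged, to 2050
p2050 = p1994 * (1 + r) ** (2050 - 1994)
print(f"Unchecked-growth projection for 2050: {p2050:.1f} billion")
```

The naive projection lands near 13.5 billion, the same order as the UN's 12.5 billion figure for unchecked growth, which is what the comparison is meant to illustrate.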


Although new and effective AIDS drugs have brought hope to many HIV-infected persons, a number of social and ethical dilemmas still confront researchers and public-health officials. The latest combination drug therapies are far too expensive for infected persons in the developing world—particularly in sub-Saharan Africa, where the majority of AIDS deaths have occurred. In these regions, where the incidence of HIV infection continues to soar, the lack of access to drugs can be catastrophic. In 1998, responding to an international outcry, several pharmaceutical firms announced that they would slash the price of AIDS drugs in developing nations by as much as 75 percent. However, some countries argued that drug firms had failed to deliver on their promises of less expensive drugs. In South Africa government officials developed legislation that would enable the country to override the patent rights of drug firms by importing cheaper generic medicines made in India and Thailand to treat HIV infection. In 1998, 39 pharmaceutical companies sued the South African government on the grounds that the legislation violated international trade agreements. Pharmaceutical companies eventually dropped their legal efforts in April 2001, conceding that South Africa’s legislation did comply with international trading laws. The end of the legal battle was expected to pave the way for other developing countries to gain access to more affordable AIDS drugs. AIDS research in the developing world has raised ethical questions pertaining to the clinical testing of new therapies and potential vaccines. For example, controversy erupted over 1997 clinical trials that tested a shorter course of Zidovudine (or AZT) therapy in HIV-infected pregnant women in developing countries. Earlier studies had shown that administering AZT to pregnant women for up to six months prior to birth could cut mother-to-child transmission of HIV by up to two-thirds. 
The treatment’s $800 cost, however, made it too expensive for patients in developing nations. The controversial 1997 clinical trials, which were conducted in Thailand and other regions in Asia and Africa, tested a shorter course of AZT treatment, costing only $50. Some pregnant women received AZT, while others received a placebo—a medically inactive substance often used in drug trials to help scientists determine the effectiveness of the drug under study. Ultimately the shorter course of AZT treatment proved to be successful and is now standard practice in a growing number of developing nations. However, at the time of the trials, critics charged that using a placebo on HIV-infected pregnant women—when AZT had already been shown to prevent mother-to-child transmission—was unethical and needlessly placed babies at fatal risk. Defenders of the studies countered that a placebo was necessary to accurately gauge the effectiveness of the AZT short-course treatment. Some critics doubted whether such a trial, while apparently acceptable in the developing nations of Asia and Africa, would ever have been viewed as ethical, or even permissible, in a developed nation like the United States. Similar ethical questions surround the testing of AIDS vaccines in developing nations. Vaccines typically use weakened or killed HIV to spark antibody production. In some vaccines, these weakened or killed viruses have the potential to cause infection and disease. Critics questioned whether it is ethical to place all the risk on test subjects in developing regions such as sub-Saharan Africa, where a person infected by a vaccine would have little or no access to medical care. At the same time, with AIDS causing up to 5,500 deaths a day in Africa, others feel that developing nations must pursue any medical avenue for stemming the epidemic and protecting people from the virus. 
For the struggling economies of some developing nations, AIDS has brought yet another burden: AIDS tends to kill young adults in the prime of their lives—the primary breadwinners and caregivers in families. According to figures released by the United Nations in 1999, AIDS has shortened the life expectancy in some African nations by an average of seven years. In Zimbabwe, life expectancy has dropped from 61 years in 1993 to 49 in 1999. The next few decades may see it fall as low as 41 years. Upwards of 11 million children have been orphaned by the AIDS epidemic. Those children who survive face a lack of income, a higher risk of malnutrition and disease, and the breakdown of family structure. In Africa, the disease has had a heavy impact on urban professionals—educated, skilled workers who play a critical role in the labor force of industries such as agriculture, education, transportation, and government. The decline in the skilled workforce has already damaged economic growth in Africa, and economists warn of disastrous consequences in the future. The social, ethical, and economic effects of the AIDS epidemic are still being played out, and no one is certain what the consequences will be. Despite the many grim facts of the AIDS epidemic, however, humanity is armed with proven, effective weapons against the disease: knowledge, education, prevention, and the ever-growing store of information about the virus’s actions.





Solar storms are natural events that occur when high-energy particles from the sun hit the Earth. They take place when the sun releases energy in the form of outbursts or eruptions. Such outbursts are also called solar flares. Energy is set free and transported to outer space.

Solar storms contain gas and other matter and can travel at extremely high speeds. When such particles hit the Earth or any other planet with an atmosphere, they cause a geomagnetic storm - a disturbance in the magnetic field that surrounds the planet. Normally such outbursts are not dangerous. They are the cause of polar lights - bright, colorful lights in the skies of the northern regions. They may, however, endanger us in other ways. Such outbursts of the sun’s energy can cause communication problems, interfere with satellite reception or lead to incorrect GPS readings. In the past they have even shut down electric power grids. The most damaging events happened in the 19th century, when solar storms started fires in North America and Europe and caused auroras as far south as the equator. Luckily, the world then did not depend on technology as heavily as it does today; such forceful eruptions could do much more damage now. An American investigation in 2008 showed that extreme solar storms could cause billions of dollars in damage. Several organizations around the world monitor the sun’s activity and the disturbances that occur in its atmosphere. They also have detectors that show variations in the Earth’s magnetic field. Solar cycles repeat themselves every 11 years. Right now the Earth is experiencing the most severe solar storm since 2003. Sky watchers in Canada and Scandinavia are already reporting sightings of more northern lights than usual. As the sun is currently becoming more active, we will see more and more solar flares over the next few years. However, the solar cycle we are in at the moment is relatively quiet compared to others of recent decades. The last major problems caused by solar storms occurred in 1994, when communications satellites over Canada malfunctioned and power in many parts of the country went out for a few hours. 
When solar storms pass through the Earth’s atmosphere, radiation levels are higher for a few days. Airlines are especially worried about these outbursts of radiation because long-distance flights use polar routes, an area where disruptions are most severe. During such storms there are periods when the crew cannot communicate with ground control stations. Astronauts orbiting the Earth in the International Space Station may also be in danger because radiation levels are much higher than normal. Outbursts of solar energy even affect animals that are sensitive to changes in the Earth’s magnetic field. During such events they lose orientation and get lost.


Although the overall situation of women has improved in the past decades, they are still discriminated against when it comes to work. They get paid less than men for the same work and in some cases do not have the same opportunities as men to reach high-ranking positions. However, this is starting to change. Organizations like the United Nations or UNESCO, in particular, are giving women better opportunities. Many European Union countries have introduced quotas for women in high-ranking positions. But in other areas women are still second-class citizens. In industrial countries of the developed world they have become far more equal to males. In the past four decades the proportion of women who have paid jobs has gone up from below half to 64%. There are, however, differences from country to country. While in Scandinavian countries almost three quarters of all women have a job, the share of females in the labor force in southern and eastern Europe is only about 50%. The role of women has changed drastically during the 20th century. In the early 1900s female workers were employed mainly in factories or worked as servants. In the course of time they became more educated and started working as nurses, teachers, even doctors and lawyers. In the 1960s, women, for the first time, were able to actively plan their families. Birth control pills and other contraceptives made it possible for women to have a career, a family, or even both. Many went to high school and college and sought a job. In the 1970s women in developed countries started to become a major part of the workforce. More females in the workforce have brought along many advantages for industries and employers. They have a wider variety of workers to choose from, and women often have better ideas and make positive contributions to how things are done. Additional workers also help the economy thrive. They spend money and contribute to the growth of national income. 
In many countries they provide extra income for a country whose population is getting older and older. In America, economists think that the GDP is 25% larger than it would be without women in the workforce. According to a new survey, about one billion women are expected to enter the workforce in the next decade. This should not only contribute to economic growth but also improve gender equality. Even though women should be treated equally, they still get, on average, about 18% less pay for the same work. Females suffer from inequalities in other areas too. Many women wish to start a career and search for fulfillment outside family life. However, in most cases it is harder for them to get to the absolute top than it is for men. Only about 3% of top CEOs are women. While the situation of women in developed countries may have come to a standstill, females in Asian countries like China, Singapore or South Korea are experiencing a boom in good job offers. More and more of them are reaching top positions. One of the issues that is still hard for women to manage is child care. Not only do working mothers spend more on education and babysitters; single mothers who raise a child alone find it nearly impossible to reach a top position at the same time. Even if a woman has a working husband, men are not keen on taking leave to care for the baby. Most men still consider this a woman’s job. Nevertheless, there are countries where women and men lead equal lives and also find equal opportunity. Among Scandinavian countries, which generally offer many opportunities for women, Iceland ranks first. The United States is currently in 19th place, up from the 31st spot, mainly because President Obama has offered women more jobs in government offices. At the bottom of the list are developing countries like Yemen and Pakistan. 


Rice is one of the world’s most important food crops. It is a grain, like wheat and corn. Almost all the people who depend on rice for their food live in Asia. Young rice plants are bright green. The grain is ripe about 120 to 180 days after planting and turns golden yellow at harvest time. In some tropical countries rice can be harvested up to three times a year. Each rice plant carries hundreds or thousands of kernels. A typical rice kernel is 6–10 mm long and has four parts: The hull is the hard outer part, which is not good to eat. The bran layers protect the inner parts of the kernel; they have vitamins and minerals in them. The endosperm makes up most of the kernel and has a lot of starch in it. The embryo is a small part from which a new rice plant can grow. Rice grows best in tropical regions. It needs a lot of water and high temperatures. It grows on heavy, muddy soils that can hold water. In many cases farmers grow rice in paddies. These are fields that have dirt walls around them to keep the water inside. The fields are flooded with water, and seeds or small rice plants are placed into the muddy soil. In southeast Asia and other developing countries farmers do most of the work by hand. They use oxen or water buffaloes to pull the ploughs. In industrialized countries the work is done mostly by machines. Two or three weeks before the harvest begins, water is pumped out of the fields. The rice is cut, and the kernels are separated from the rest of the plant. The wet kernels are laid on mats to dry in the sun. Sometimes brown rice, in which the bran layers remain, is produced. Then it is packaged and sold. Rice gives your body energy in the form of carbohydrates. It also has vitamin B and other minerals in it. Rice has little fat and is easy to digest. Rice is in many other foods as well. It is in breakfast cereals, frozen and baby foods, and soup. Breweries use rice to make beer. 
In Japan, rice kernels are used to make an alcoholic drink. Most rice is grown in lowland regions, but about one fifth of the world’s rice is upland rice, which grows on terraces in the mountains. The world’s farmers grow more than 700 million tons a year. 90% of the rice production comes from Asia. China and India are the world’s biggest producers. In these countries rice is planted in the big river plains of the Ganges and Yangtze. Almost all of Asia’s rice is eaten by the population there. Sometimes they don’t even have enough to feed their own people. Other countries, like the USA, produce rice for export.


In the past thirty years, Americans’ consumption of restaurant and take-out food has doubled. The result, according to many health watchdog groups, is an increase in overweight and obesity. Almost 60 million Americans are obese, costing $117 billion each year in health care and related costs. Members of Congress have decided they need to do something about the obesity epidemic. A bill was recently introduced in the House that would require restaurants with twenty or more locations to list the nutritional content of their food on their menus. A Senate version of the bill is expected in the near future. Our legislators point to the trend of restaurants’ marketing larger meals at attractive prices. People order these meals believing that they are getting a great value, but what they are also getting could be, in one meal, more than the daily recommended allowances of calories, fat, and sodium. The question is, would people stop “supersizing,” or make other healthier choices if they knew the nutritional content of the food they’re ordering? Lawmakers think they would, and the gravity of the obesity problem has caused them to act to change menus. The Menu Education and Labeling, or MEAL, Act, would result in menus that look like the nutrition facts panels found on food in supermarkets. Those panels are required by the 1990 Nutrition Labeling and Education Act, which exempted restaurants. The new restaurant menus would list calories, fat, and sodium on printed menus, and calories on menu boards, for all items that are offered on a regular basis (daily specials don’t apply). But isn’t this simply asking restaurants to state the obvious? Who isn’t aware that an order of supersize fries isn’t health food? Does anyone order a double cheeseburger thinking they’re being virtuous? Studies have shown that it’s not that simple. In one, registered dieticians couldn’t come up with accurate estimates of the calories found in certain fast foods. 
Who would have guessed that a milk shake, which sounds pretty healthy (it does contain milk, after all) has more calories than three McDonald’s cheeseburgers? Or that one chain’s chicken breast sandwich, another better-sounding alternative to a burger, contains more than half a day’s calories and twice the recommended daily amount of sodium? Even a fast-food coffee drink, without a doughnut to go with it, has almost half the calories needed in a day. The restaurant industry isn’t happy about the new bill. Arguments against it include the fact that diet alone is not the reason for America’s obesity epidemic. A lack of adequate exercise is also to blame. In addition, many fast food chains already post nutritional information on their websites, or on posters located in their restaurants. Those who favor the MEAL Act, and similar legislation, say in response that we must do all we can to help people maintain a healthy weight. While the importance of exercise is undeniable, the quantity and quality of what we eat must be changed. They believe that if we want consumers to make better choices when they eat out, nutritional information must be provided where they are selecting their food. Restaurant patrons are not likely to have memorized the calorie counts they may have looked up on the Internet, nor are they going to leave their tables, or a line, to check out a poster that might be on the opposite side of the restaurant.



In 1904, the U.S. Patent Office granted a patent for a board game called “The Landlord’s Game,” which was invented by a Virginia Quaker named Lizzie Magie. Magie was a follower of Henry George, who started a tax movement that supported the theory that the renting of land and real estate produced an unearned increase in land values that profited a few individuals (landlords) rather than the majority of the people (tenants). George proposed a single federal tax based on land ownership; he believed this tax would weaken the ability to form monopolies, encourage equal opportunity, and narrow the gap between rich and poor. Lizzie Magie wanted to spread the word about George’s proposal, making it more understandable to a majority of people who were basically unfamiliar with economics. As a result, she invented a board game that would serve as a teaching device. The Landlord’s Game was intended to explain the evils of monopolies, showing that they repressed the possibility for equal opportunity. Her instructions read in part: “The object of this game is not only to afford amusement to players, but to illustrate to them how, under the present or prevailing system of land tenure, the landlord has an advantage over other enterprisers, and also how the single tax would discourage speculation.” The board for the game was painted with forty spaces around its perimeter, including four railroads, two utilities, twenty-two rental properties, and a jail. There were other squares directing players to go to jail, pay a luxury tax, and park. All properties were available for rent, rather than purchase. Magie’s invention became very popular, spreading through word of mouth, and altering slightly as it did. Since it was not manufactured by Magie, the boards and game pieces were homemade. Rules were explained and transmuted, from one group of friends to another. There is evidence to suggest that The Landlord’s Game was played at Princeton, Harvard, and the University of Pennsylvania. 
In 1924, Magie approached George Parker (President of Parker Brothers) to see if he was interested in purchasing the rights to her game. Parker turned her down, saying that it was too political. The game increased in popularity, migrating north to New York state, west to Michigan, and as far south as Texas. By the early 1930s, it reached Charles Darrow in Philadelphia. In 1935, claiming to be the inventor, Darrow got a patent for the game, and approached Parker Brothers. This time, the company loved it, swallowed Darrow’s prevarication, and not only purchased his patent, but paid him royalties for every game sold. The game quickly became Parker Brothers’ bestseller, and made the company, and Darrow, millions of dollars. When Parker Brothers found out that Darrow was not the true inventor of the game, they wanted to protect their rights to the successful game, so they went back to Lizzie Magie, now Mrs. Elizabeth Magie Phillips of Clarendon, Virginia. She agreed to a payment of $500 for her patent, with no royalties, so she could stay true to the original intent of her game’s invention. She therefore required in return that Parker Brothers manufacture and market The Landlord’s Game in addition to Monopoly. However, only a few hundred games were ever produced. Monopoly went on to become the world’s bestselling board game, with an objective that is the exact opposite of the one Magie intended: “The idea of the game is to buy and rent or sell property so profitably that one becomes the wealthiest player and eventually monopolist. The game is one of shrewd and amusing trading and excitement.”


Why we need to protect polar bears

Polar bears are being increasingly threatened by the effects of climate change, but their disappearance could have far-reaching consequences. They are uniquely adapted to the extreme conditions of the Arctic Circle, where temperatures can reach -40°C. One reason for this is that they have up to 11 centimetres of fat underneath their skin. Humans with comparable levels of adipose tissue would be considered obese and would be likely to suffer from diabetes and heart disease. Yet the polar bear experiences no such consequences.

A 2014 study by Shi Ping Liu and colleagues sheds light on this mystery. They compared the genetic structure of polar bears with that of their closest relatives from a warmer climate, the brown bears. This allowed them to determine the genes that have allowed polar bears to survive in one of the toughest environments on Earth. Liu and his colleagues found the polar bears had a gene known as APoB, which reduces levels of low-density lipoproteins (LDLs) – a form of ‘bad’ cholesterol. In humans, mutations of this gene are associated with increased risk of heart disease. Polar bears may therefore be an important study model to understand heart disease in humans.

The genome of the polar bear may also provide the solution for another condition, one that particularly affects our older generation: osteoporosis. This is a disease where bones show reduced density, usually caused by insufficient exercise, reduced calcium intake or food starvation. Bone tissue is constantly being remodelled, meaning that bone is added or removed, depending on nutrient availability and the stress that the bone is under. Female polar bears, however, undergo extreme conditions during every pregnancy. Once autumn comes around, these females will dig maternity dens in the snow and will remain there throughout the winter, both before and after the birth of their cubs. 
This process results in about six months of fasting, where the female bears have to keep themselves and their cubs alive, depleting their own calcium and calorie reserves. Despite this, their bones remain strong and dense.

Physiologists Alanda Lennox and Allen Goodship found an explanation for this paradox in 2008. They discovered that pregnant bears were able to increase the density of their bones before they started to build their dens. In addition, six months later, when they finally emerged from the den with their cubs, there was no evidence of significant loss of bone density. Hibernating brown bears do not have this capacity and must therefore resort to major bone reformation in the following spring. If the mechanism of bone remodelling in polar bears can be understood, many bedridden humans, and even astronauts, could potentially benefit.

The medical benefits of the polar bear for humanity certainly have their importance in our conservation efforts, but these should not be the only factors taken into consideration. We tend to want to protect animals we think are intelligent and possess emotions, such as elephants and primates. Bears, on the other hand, seem to be perceived as stupid and in many cases violent. And yet anecdotal evidence from the field challenges those assumptions, suggesting for example that polar bears have good problem-solving abilities. A male bear called GoGo in Tennoji Zoo, Osaka, has even been observed making use of a tool to manipulate his environment. The bear used a tree branch on multiple occasions to dislodge a piece of meat hung out of his reach. Problem-solving ability has also been witnessed in wild polar bears, although not as obviously as with GoGo. A calculated move by a male bear involved running and jumping onto barrels in an attempt to get to a photographer standing on a platform four metres high.

In other studies, such as one by Alison Ames in 2008, polar bears showed deliberate and focussed manipulation. 
For example, Ames observed bears putting objects in piles and then knocking them over in what appeared to be a game. The study demonstrates that bears are capable of agile and thought-out behaviours. These examples suggest bears have greater creativity and problem-solving abilities than previously thought.

As for emotions, while the evidence is once again anecdotal, many bears have been seen to hit out at ice and snow – seemingly out of frustration – when they have just missed out on a kill. Moreover, polar bears can form unusual relationships with other species, including playing with the dogs used to pull sleds in the Arctic. Remarkably, one hand-raised polar bear called Agee has formed a close relationship with her owner Mark Dumas to the point where they even swim together. This is even more astonishing since polar bears are known to actively hunt humans in the wild.

If climate change were to lead to their extinction, this would mean not only the loss of potential breakthroughs in human medicine, but more importantly, the disappearance of an intelligent, majestic animal.

Questions 1-7
Do the following statements agree with the information given in Reading Passage? In boxes 1-7 on your answer sheet, write
TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

1. Polar bears suffer from various health problems due to the build-up of fat under their skin.
2. The study done by Liu and his colleagues compared different groups of polar bears.
3. Liu and colleagues were the first researchers to compare polar bears and brown bears genetically.
4. Polar bears are able to control their levels of ‘bad’ cholesterol by genetic means.
5. Female polar bears are able to survive for about six months without food.
6. It was found that the bones of female polar bears were very weak when they came out of their dens in spring.
7. The polar bear’s mechanism for increasing bone density could also be used by people one day.

Questions 8-13
Complete the table below. Choose ONE WORD ONLY from the passage for each answer. Write your answers in boxes 8-13 on your answer sheet.

Reasons why polar bears should be protected
People think of bears as unintelligent and (8) …………………… However, this may not be correct. For example:
• In Tennoji Zoo, a bear has been seen using a branch as a (9) …………………… This allowed him to knock down some (10) ……………………
• A wild polar bear worked out a method of reaching a platform where a (11) …………… was located.
• Polar bears have displayed behaviour such as conscious manipulation of objects and activity similar to a (12) ……………………….

Bears may also display emotions. For example:
• They may make movements suggesting (13) ……………… if disappointed when hunting.

The Step Pyramid of Djoser

A The pyramids are the most famous monuments of ancient Egypt and still hold enormous interest for people in the present day. These grand, impressive tributes to the memory of the Egyptian kings have become linked with the country even though other cultures, such as the Chinese and Mayan, also built pyramids. The evolution of the pyramid form has been written and argued about for centuries. However, there is no question that, as far as Egypt is concerned, it began with one monument to one king designed by one brilliant architect: the Step Pyramid of Djoser at Saqqara.

B Djoser was the first king of the Third Dynasty of Egypt and the first to build in stone. Prior to Djoser’s reign, tombs were rectangular monuments made of dried clay brick, which covered underground passages where the deceased person was buried. For reasons which remain unclear, Djoser’s main official, whose name was Imhotep, conceived of building a taller, more impressive tomb for his king by stacking stone slabs on top of one another, progressively making them smaller, to form the shape now known as the Step Pyramid. Djoser is thought to have reigned for 19 years, but some historians and scholars attribute a much longer time for his rule, owing to the number and size of the monuments he built.

C The Step Pyramid has been thoroughly examined and investigated over the last century, and it is now known that the building process went through many different stages. Historian Marc Van de Mieroop comments on this, writing ‘Much experimentation was involved, which is especially clear in the construction of the pyramid in the center of the complex. 
It had several plans … before it became the first Step Pyramid in history, piling six levels on top of one another … The weight of the enormous mass was a challenge for the builders, who placed the stones at an inward incline in order to prevent the monument breaking up.’

D When finally completed, the Step Pyramid rose 62 meters high and was the tallest structure of its time. The complex in which it was built was the size of a city in ancient Egypt and included a temple, courtyards, shrines, and living quarters for the priests. It covered a region of 16 hectares and was surrounded by a wall 10.5 meters high. The wall had 13 false doors cut into it with only one true entrance cut into the south-east corner; the entire wall was then ringed by a trench 750 meters long and 40 meters wide. The false doors and the trench were incorporated into the complex to discourage unwanted visitors. If someone wished to enter, he or she would have needed to know in advance how to find the location of the true opening in the wall. Djoser was so proud of his accomplishment that he broke the tradition of having only his own name on the monument and had Imhotep’s name carved on it as well.

E The burial chamber of the tomb, where the king’s body was laid to rest, was dug beneath the base of the pyramid, surrounded by a vast maze of long tunnels that had rooms off them to discourage robbers. One of the most mysterious discoveries found inside the pyramid was a large number of stone vessels. Over 40,000 of these vessels, of various forms and shapes, were discovered in storerooms off the pyramid’s underground passages. They are inscribed with the names of rulers from the First and Second Dynasties of Egypt and made from different kinds of stone. There is no agreement among scholars and archaeologists on why the vessels were placed in the tomb of Djoser or what they were supposed to represent. 
The archaeologist Jean-Philippe Lauer, who excavated most of the pyramid and complex, believes they were originally stored and then given a ‘proper burial’ by Djoser in his pyramid to honor his predecessors. There are other historians, however, who claim the vessels were dumped into the shafts as yet another attempt to prevent grave robbers from getting to the king’s burial chamber.

F Unfortunately, all of the precautions and intricate design of the underground network did not prevent ancient robbers from finding a way in. Djoser’s grave goods, and even his body, were stolen at some point in the past, and all that archaeologists found was a small number of his valuables overlooked by the thieves. There was enough left throughout the pyramid and its complex, however, to astonish and amaze the archaeologists who excavated it.

G Egyptologist Miroslav Verner writes, ‘Few monuments hold a place in human history as significant as that of the Step Pyramid in Saqqara. It can be said without exaggeration that this pyramid complex constitutes a milestone in the evolution of monumental stone architecture in Egypt and in the world as a whole.’ The Step Pyramid was a revolutionary advance in architecture and became the archetype which all the other great pyramid builders of Egypt would follow.

Questions 14-20
Reading Passage 2 has seven paragraphs, A-G. Choose the correct heading for each paragraph from the list of headings below. Write the correct number, i-ix, in boxes 14-20 on your answer sheet.

List of Headings
i The areas and artefacts within the pyramid itself
ii A difficult task for those involved
iii A king who saved his people
iv A single certainty among other less definite facts
v An overview of the external buildings and areas
vi A pyramid design that others copied
vii An idea for changing the design of burial structures
viii An incredible experience despite the few remains
ix The answers to some unexpected questions

14. Paragraph A
15. Paragraph B
16. Paragraph C
17. Paragraph D
18. Paragraph E
19. Paragraph F
20. Paragraph G

Questions 21-24
Complete the notes below. Choose ONE WORD ONLY from the passage for each answer. Write your answers in boxes 21-24 on your answer sheet.

The Step Pyramid of Djoser

The complex that includes the Step Pyramid and its surroundings is considered to be as big as an Egyptian (21) ……………… of the past. The area outside the pyramid included accommodation that was occupied by (22) …………………, along with many other buildings and features. A wall ran around the outside of the complex and a number of false entrances were built into this. In addition, a long (23) …………….. encircled the wall. As a result, any visitors who had not been invited were cleverly prevented from entering the pyramid grounds unless they knew the (24) ………………….. of the real entrance.

Questions 25-26
Choose TWO letters, A-E. Write the correct letters in boxes 25 and 26 on your answer sheet.

Which TWO of the following points does the writer make about King Djoser?
A Initially he had to be persuaded to build in stone rather than clay.
B There is disagreement concerning the length of his reign.
C He failed to appreciate Imhotep’s part in the design of the Step Pyramid.
D A few of his possessions were still in his tomb when archaeologists found it.
E He criticised the design and construction of other pyramids in Egypt.

The future of work

According to a leading business consultancy, 3-14% of the global workforce will need to switch to a different occupation within the next 10-15 years, and all workers will need to adapt as their occupations evolve alongside increasingly capable machines. Automation – or ‘embodied artificial intelligence’ (AI) – is one aspect of the disruptive effects of technology on the labour market. ‘Disembodied AI’, like the algorithms running in our smartphones, is another.

Dr Stella Pachidi from Cambridge Judge Business School believes that some of the most fundamental changes are happening as a result of the ‘algorithmication’ of jobs that are dependent on data rather than on production – the so-called knowledge economy. Algorithms are capable of learning from data to undertake tasks that previously needed human judgement, such as reading legal contracts, analysing medical scans and gathering market intelligence.

‘In many cases, they can outperform humans,’ says Pachidi. ‘Organisations are attracted to using algorithms because they want to make choices based on what they consider is “perfect information”, as well as to reduce costs and enhance productivity.’

‘But these enhancements are not without consequences,’ says Pachidi. ‘If routine cognitive tasks are taken over by AI, how do professions develop their future experts?’ she asks. ‘One way of learning about a job is “legitimate peripheral participation” – a novice stands next to experts and learns by observation. If this isn’t happening, then you need to find new ways to learn.’

Another issue is the extent to which the technology influences or even controls the workforce. For over two years, Pachidi monitored a telecommunications company. ‘The way telecoms salespeople work is through personal and frequent contact with clients, using the benefit of experience to assess a situation and reach a decision.
However, the company had started using a[n] … algorithm that defined when account managers should contact certain customers about which kinds of campaigns and what to offer them.’

The algorithm – usually built by external designers – often becomes the keeper of knowledge, she explains. In cases like this, Pachidi believes, a short-sighted view begins to creep into working practices whereby workers learn through the ‘algorithm’s eyes’ and become dependent on its instructions. Alternative explorations – where experimentation and human instinct lead to progress and new ideas – are effectively discouraged.

Pachidi and colleagues even observed people developing strategies to make the algorithm work to their own advantage. ‘We are seeing cases where workers feed the algorithm with false data to reach their targets,’ she reports.

It’s scenarios like these that many researchers are working to avoid. Their objective is to make AI technologies more trustworthy and transparent, so that organisations and individuals understand how AI decisions are made. In the meantime, says Pachidi, ‘We need to make sure we fully understand the dilemmas that this new world raises regarding expertise, occupational boundaries and control.’

Economist Professor Hamish Low believes that the future of work will involve major transitions across the whole life course for everyone: ‘The traditional trajectory of full-time education followed by full-time work followed by a pensioned retirement is a thing of the past,’ says Low. Instead, he envisages a multistage employment life: one where retraining happens across the life course, and where multiple jobs and no job happen by choice at different stages.

On the subject of job losses, Low believes the predictions are founded on a fallacy: ‘It assumes that the number of jobs is fixed. If in 30 years, half of 100 jobs are being carried out by robots, that doesn’t mean we are left with just 50 jobs for humans.
The number of jobs will increase: we would expect there to be 150 jobs.’

Dr Ewan McGaughey, at Cambridge’s Centre for Business Research and King’s College London, agrees that ‘apocalyptic’ views about the future of work are misguided. ‘It’s the laws that restrict the supply of capital to the job market, not the advent of new technologies that causes unemployment.’

His recently published research answers the question of whether automation, AI and robotics will mean a ‘jobless future’ by looking at the causes of unemployment. ‘History is clear that change can mean redundancies. But social policies can tackle this through retraining and redeployment.’

He adds: ‘If there is going to be change to jobs as a result of AI and robotics then I’d like to see governments seizing the opportunity to improve policy to enforce good job security. We can “reprogramme” the law to prepare for a fairer future of work and leisure.’ McGaughey’s findings are a call to arms to leaders of organisations, governments and banks to pre-empt the coming changes with bold new policies that guarantee full employment, fair incomes and a thriving economic democracy.

‘The promises of these new technologies are astounding. They deliver humankind the capacity to live in a way that nobody could have once imagined,’ he adds. ‘Just as the industrial revolution brought people past subsistence agriculture, and the corporate revolution enabled mass production, a third revolution has been pronounced. But it will not only be one of technology. The next revolution will be social.’

Questions 27-30
Choose the correct letter, A, B, C or D.

27. The first paragraph tells us about
A the kinds of jobs that will be most affected by the growth of AI.
B the extent to which AI will alter the nature of the work that people do.
C the proportion of the world’s labour force who will have jobs in AI in the future.
D the difference between ways that embodied and disembodied AI will impact on workers.

28. According to the second paragraph, what is Stella Pachidi’s view of the ‘knowledge economy’?
A It is having an influence on the number of jobs available.
B It is changing people’s attitudes towards their occupations.
C It is the main reason why the production sector is declining.
D It is a key factor driving current developments in the workplace.

29. What did Pachidi observe at the telecommunications company?
A staff disagreeing with the recommendations of AI
B staff feeling resentful about the intrusion of AI in their work
C staff making sure that AI produces the results that they want
D staff allowing AI to carry out tasks they ought to do themselves

30. In his recently published research, Ewan McGaughey
A challenges the idea that redundancy is a negative thing.
B shows the profound effect of mass unemployment on society.
C highlights some differences between past and future job losses.
D illustrates how changes in the job market can be successfully handled.

Questions 31-34
Complete the summary using the list of words, A-G, below. Write the correct letter, A-G, in boxes 31-34 on your answer sheet.

The ‘algorithmication’ of jobs

Stella Pachidi of Cambridge Judge Business School has been focusing on the ‘algorithmication’ of jobs which rely not on production but on (31) ………………….. While monitoring a telecommunications company, Pachidi observed a growing (32) ……………………. on the recommendations made by AI, as workers begin to learn through the ‘algorithm’s eyes’. Meanwhile, staff are deterred from experimenting and using their own (33) ………………. and are therefore prevented from achieving innovation. To avoid the kind of situations which Pachidi observed, researchers are trying to make AI’s decision-making process easier to comprehend, and to increase users’ (34) …………………. with regard to the technology.

A pressure
B satisfaction
C intuition
D promotion
E reliance
F confidence
G information

Questions 35-40
Look at the following statements (Questions 35-40) and the list of people below. Match each statement with the correct person, A, B or C. Write the correct letter, A, B or C, in boxes 35-40 on your answer sheet. NB You may use any letter more than once.

35. Greater levels of automation will not result in lower employment.
36. There are several reasons why AI is appealing to businesses.
37. AI’s potential to transform people’s lives has parallels with major cultural shifts which occurred in previous eras.
38. It is important to be aware of the range of problems that AI causes.
39. People are going to follow a less conventional career path than in the past.
40. Authorities should take measures to ensure that there will be adequately paid work for everyone.

List of people
A Stella Pachidi
B Hamish Low
C Ewan McGaughey


1. False
2. False
3. Not given
4. True
5. True
6. False
7. True
8. Violent
9. Tool
10. Meat
11. Photographer
12. Game
13. Frustration
14. iv
15. vii
16. ii
17. v
18. i
19. viii
20. vi
21. City
22. Priests
23. Trench
24. Location
25. B, D
26. B, D
27. B
28. D
29. C
30. D
31. G
32. E
33. C
34. F
35. B
36. A
37. C
38. A
39. B
40. C


The return of the huarango

The south coast of Peru is a narrow, 2,000-kilometre-long strip of desert squeezed between the Andes and the Pacific Ocean. It is also one of the most fragile ecosystems on Earth. It hardly ever rains there, and the only year-round source of water is located tens of metres below the surface. This is why the huarango tree is so suited to life there: it has the longest roots of any tree in the world. They stretch down 50-80 metres and, as well as sucking up water for the tree, they bring it into the higher subsoil, creating a water source for other plant life.

Dr David Beresford-Jones, archaeobotanist at Cambridge University, has been studying the role of the huarango tree in landscape change in the Lower Ica Valley in southern Peru. He believes the huarango was key to the ancient people’s diet and, because it could reach deep water sources, it allowed local people to withstand years of drought when their other crops failed. But over the centuries huarango trees were gradually replaced with crops. Cutting down native woodland leads to erosion, as there is nothing to keep the soil in place. So when the huarangos go, the land turns into a desert. Nothing grows at all in the Lower Ica Valley now.

For centuries the huarango tree was vital to the people of the neighbouring Middle Ica Valley too. They grew vegetables under it and ate products made from its seed pods. Its leaves and bark were used for herbal remedies, while its branches were used for charcoal for cooking and heating, and its trunk was used to build houses. But now it is disappearing rapidly.
The majority of the huarango forests in the valley have already been cleared for fuel and agriculture – initially, these were smallholdings, but now they’re huge farms producing crops for the international market.

‘Of the forests that were here 1,000 years ago, 99 per cent have already gone,’ says botanist Oliver Whaley from Kew Gardens in London, who, together with ethnobotanist Dr William Milliken, is running a pioneering project to protect and restore the rapidly disappearing habitat. In order to succeed, Whaley needs to get the local people on board, and that has meant overcoming local prejudices. ‘Increasingly aspirational communities think that if you plant food trees in your home or street, it shows you are poor, and still need to grow your own food,’ he says. In order to stop the Middle Ica Valley going the same way as the Lower Ica Valley, Whaley is encouraging locals to love the huarangos again. ‘It’s a process of cultural resuscitation,’ he says. He has already set up a huarango festival to reinstate a sense of pride in their eco-heritage, and has helped local schoolchildren plant thousands of trees.

‘In order to get people interested in habitat restoration, you need to plant a tree that is useful to them,’ says Whaley. So, he has been working with local families to attempt to create a sustainable income from the huarangos by turning their products into foodstuffs. ‘Boil up the beans and you get this thick brown syrup like molasses. You can also use it in drinks, soups or stews.’ The pods can be ground into flour to make cakes, and the seeds roasted into a sweet, chocolatey ‘coffee’. ‘It’s packed full of vitamins and minerals,’ Whaley says.

And some farmers are already planting huarangos. Alberto Benevides, owner of Ica Valley’s only certified organic farm, which Whaley helped set up, has been planting the tree for 13 years. He produces syrup and flour, and sells these products at an organic farmers’ market in Lima.
His farm is relatively small and doesn’t yet provide him with enough to live on, but he hopes this will change. ‘The organic market is growing rapidly in Peru,’ Benevides says. ‘I am investing in the future.’

But even if Whaley can convince the local people to fall in love with the huarango again, there is still the threat of the larger farms. Some of these cut across the forests and break up the corridors that allow the essential movement of mammals, birds and pollen up and down the narrow forest strip. In the hope of counteracting this, he’s persuading farmers to let him plant forest corridors on their land. He believes the extra woodland will also benefit the farms by reducing their water usage through a lowering of evaporation and providing a refuge for bio-control insects.

‘If we can record biodiversity and see how it all works, then we’re in a good position to move on from there. Desert habitats can reduce down to very little,’ Whaley explains. ‘It’s not like a rainforest that needs to have this huge expanse. Life has always been confined to corridors and islands here. If you just have a few trees left, the population can grow up quickly because it’s used to exploiting water when it arrives.’ He sees his project as a model that has the potential to be rolled out across other arid areas around the world. ‘If we can do it here, in the most fragile system on Earth, then that’s a real message of hope for lots of places, including Africa, where there is drought and they just can’t afford to wait for rain.’

Questions 1-5
Complete the notes below. Choose ONE WORD ONLY from the passage for each answer.

The importance of the huarango tree
• its roots can extend as far as 80 metres into the soil
• can access (1) …………….. deep below the surface
• was a crucial part of local inhabitants’ (2) …………… a long time ago
• helped people to survive periods of (3) ………………..
• prevents (4) ………………. of the soil
• prevents land from becoming a (5) ……………..

Questions 6-8
Complete the table below. Choose NO MORE THAN TWO WORDS from the passage for each answer.

Questions 9-13
Do the following statements agree with the information given in Reading Passage 1? In boxes 9-13, write

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

9. Local families have told Whaley about some traditional uses of huarango products.
10. Farmer Alberto Benevides is now making a good profit from growing huarangos.
11. Whaley needs the co-operation of farmers to help preserve the area’s wildlife.
12. For Whaley’s project to succeed, it needs to be extended over a very large area.
13. Whaley has plans to go to Africa to set up a similar project.


1. Water
2. Diet
3. Drought
4. Erosion
5. Desert
6. (its/ huarango/ the) branches
7. leaves (and) bark
8. (its/ huarango/ the) trunk
9. Not given
10. False
11. True
12. False
13. Not given
14. Not given
15. False
16. True
17. False
18. False
19. True
20. Words
21. Finger
22. Direction
23. Commands
24. Fires
25. Technology
26. Award
27. D
28. E
29. F
30. H
31. B
32. C
33. D
34. B
35. Yes
36. Not given
37. No
38. Yes
39. Not given
40. D


Roman tunnels

The Persians, who lived in present-day Iran, were one of the first civilizations to build tunnels that provided a reliable supply of water to human settlements in dry areas. In the early first millennium BCE, they introduced the qanat method of tunnel construction, which consisted of placing posts over a hill in a straight line, to ensure that the tunnel kept to its route, and then digging vertical shafts down into the ground at regular intervals. Underground, workers removed the earth from between the ends of the shafts, creating a tunnel. The excavated soil was taken up to the surface using the shafts, which also provided ventilation during the work. Once the tunnel was completed, it allowed water to flow from the top of a hillside down towards a canal, which supplied water for human use. Remarkably, some qanats built by the Persians 2,700 years ago are still in use today.

They later passed on their knowledge to the Romans, who also used the qanat method to construct water-supply tunnels for agriculture. Roman qanat tunnels were constructed with vertical shafts dug at intervals of between 30 and 60 meters. The shafts were equipped with handholds and footholds to help those climbing in and out of them and were covered with a wooden or stone lid. To ensure that the shafts were vertical, Romans hung a plumb line from a rod placed across the top of each shaft and made sure that the weight at the end of it hung in the center of the shaft. Plumb lines were also used to measure the depth of the shaft and to determine the slope of the tunnel. The 5.6-kilometer-long Claudius tunnel, built in 41 CE to drain the Fucine Lake in central Italy, had shafts that were up to 122 meters deep, took 11 years to build and involved approximately 30,000 workers.

By the 6th century BCE, a second method of tunnel construction appeared called the counter-excavation method, in which the tunnel was constructed from both ends.
It was used to cut through high mountains when the qanat method was not a practical alternative. This method required greater planning and advanced knowledge of surveying, mathematics and geometry as both ends of a tunnel had to meet correctly at the center of the mountain. Adjustments to the direction of the tunnel also had to be made whenever builders encountered geological problems or when it deviated from its set path. They constantly checked the tunnel’s advancing direction, for example, by looking back at the light that penetrated through the tunnel mouth, and made corrections whenever necessary. Large deviations could happen, and they could result in one end of the tunnel not being usable. An inscription written on the side of a 428-meter tunnel, built by the Romans as part of the Saldae aqueduct system in modern-day Algeria, describes how the two teams of builders missed each other in the mountain and how the later construction of a lateral link between both corridors corrected the initial error.

The Romans dug tunnels for their roads using the counter-excavation method, whenever they encountered obstacles such as hills or mountains that were too high for roads to pass over. An example is the 37-meter-long, 6-meter-high, Furlo Pass Tunnel built in Italy in 69-79 CE. Remarkably, a modern road still uses this tunnel today. Tunnels were also built for mineral extraction. Miners would locate a mineral vein and then pursue it with shafts and tunnels underground. Traces of such tunnels used to mine gold can still be found at the Dolaucothi mines in Wales. When the sole purpose of a tunnel was mineral extraction, construction required less planning, as the tunnel route was determined by the mineral vein.

Roman tunnel projects were carefully planned and carried out. The length of time it took to construct a tunnel depended on the method being used and the type of rock being excavated.
The qanat construction method was usually faster than the counter-excavation method as it was more straightforward. This was because the mountain could be excavated not only from the tunnel mouths but also from shafts. The type of rock could also influence construction times. When the rock was hard, the Romans employed a technique called fire quenching which consisted of heating the rock with fire, and then suddenly cooling it with cold water so that it would crack. Progress through hard rock could be very slow, and it was not uncommon for tunnels to take years, if not decades, to be built. Construction marks left on a Roman tunnel in Bologna show that the rate of advance through solid rock was 30 centimeters per day. In contrast, the rate of advance of the Claudius tunnel can be calculated at 1.4 meters per day. Most tunnels had inscriptions showing the names of patrons who ordered construction and sometimes the name of the architect. For example, the 1.4-kilometer Çevlik tunnel in Turkey, built to divert the floodwater threatening the harbor of the ancient city of Seleuceia Pieria, had inscriptions on the entrance, still visible today, that also indicate that the tunnel was started in 69 CE and was completed in 81 CE.

Questions 1-6
Label the diagram below. Choose ONE WORD ONLY from the passage for each answer.
Questions 7-10
Do the following statements agree with the information given in Reading Passage? In boxes 7-10 on your answer sheet, write

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

7. The counter-excavation method completely replaced the qanat method in the 6th century BCE.
8. Only experienced builders were employed to construct a tunnel using the counter-excavation method.
9. The information about a problem that occurred during the construction of the Saldae aqueduct system was found in an ancient book.
10. The mistake made by the builders of the Saldae aqueduct system was that the two parts of the tunnel failed to meet.

Questions 11-13
Answer the questions below. Choose NO MORE THAN TWO WORDS from the passage for each answer. Write your answers in boxes 11-13 on your answer sheet.

11. What type of mineral were the Dolaucothi mines in Wales built to extract?
12. In addition to the patron, whose name might be carved onto a tunnel?
13. What part of Seleuceia Pieria was the Çevlik tunnel built to protect?

Changes in reading habits

Look around on your next plane trip. The iPad is the new pacifier for babies and toddlers. Younger school-aged children read stories on smartphones; older kids don’t read at all, but hunch over video games. Parents and other passengers read on tablets or skim a flotilla of email and news feeds. Unbeknown to most of us, an invisible, game-changing transformation links everyone in this picture: the neuronal circuit that underlies the brain’s ability to read is subtly, rapidly changing and this has implications for everyone from the pre-reading toddler to the expert adult.

As work in neurosciences indicates, the acquisition of literacy necessitated a new circuit in our species’ brain more than 6,000 years ago. That circuit evolved from a very simple mechanism for decoding basic information, like the number of goats in one’s herd, to the present, highly elaborated reading brain. My research depicts how the present reading brain enables the development of some of our most important intellectual and affective processes: internalized knowledge, analogical reasoning, and inference; perspective-taking and empathy; critical analysis and the generation of insight. Research surfacing in many parts of the world now cautions that each of these essential ‘deep reading’ processes may be under threat as we move into digital-based modes of reading.

This is not a simple, binary issue of print versus digital reading and technological innovation. As MIT scholar Sherry Turkle has written, we do not err as a society when we innovate but when we ignore what we disrupt or diminish while innovating.
In this hinge moment between print and digital cultures, society needs to confront what is diminishing in the expert reading circuit, what our children and older students are not developing, and what we can do about it.

We know from research that the reading circuit is not given to human beings through a genetic blueprint like vision or language; it needs an environment to develop. Further, it will adapt to that environment’s requirements – from different writing systems to the characteristics of whatever medium is used. If the dominant medium advantages processes that are fast, multi-task oriented and well-suited for large volumes of information, like the current digital medium, so will the reading circuit. As UCLA psychologist Patricia Greenfield writes, the result is that less attention and time will be allocated to slower, time-demanding deep reading processes.

Increasing reports from educators and from researchers in psychology and the humanities bear this out. English literature scholar and teacher Mark Edmundson describes how many college students actively avoid the classic literature of the 19th and 20th centuries in favour of something simpler as they no longer have the patience to read longer, denser, more difficult texts. We should be less concerned with students’ ‘cognitive impatience’, however, than by what may underlie it: the potential inability of large numbers of students to read with a level of critical analysis sufficient to comprehend the complexity of thought and argument found in more demanding texts.

Multiple studies show that digital screen use may be causing a variety of troubling downstream effects on reading comprehension in older high school and college students. In Stavanger, Norway, psychologist Anne Mangen and her colleagues studied how high school students comprehend the same material in different mediums.
Mangen’s group asked subjects questions about a short story whose plot had universal student appeal; half of the students read the story on a tablet, the other half in paperback. Results indicated that students who read on print were superior in their comprehension to screen-reading peers, particularly in their ability to sequence detail and reconstruct the plot in chronological order.

Ziming Liu from San Jose State University has conducted a series of studies which indicate that the ‘new norm’ in reading is skimming, involving word-spotting and browsing through the text. Many readers now use a pattern when reading in which they sample the first line and then word-spot through the rest of the text. When the reading brain skims like this, it reduces time allocated to deep reading processes. In other words, we don’t have time to grasp complexity, to understand another’s feelings, to perceive beauty, and to create thoughts of the reader’s own.

The possibility that critical analysis, empathy and other deep reading processes could become the unintended ‘collateral damage’ of our digital culture is not a straightforward binary issue about print versus digital reading. It is about how we all have begun to read on various mediums and how that changes not only what we read, but also the purposes for which we read. Nor is it only about the young. The subtle atrophy of critical analysis and empathy affects us all equally. It affects our ability to navigate a constant bombardment of information. It incentivizes a retreat to the most familiar stores of unchecked information, which require and receive no analysis, leaving us susceptible to false information and irrational ideas.

There’s an old rule in neuroscience that does not alter with age: use it or lose it. It is a very hopeful principle when applied to critical thought in the reading brain because it implies choice. The story of the changing reading brain is hardly finished.
We possess both the science and the technology to identify and redress the changes in how we read before they become entrenched. If we work to understand exactly what we will lose, alongside the extraordinary new capacities that the digital world has brought us, there is as much reason for excitement as caution.

Questions 14-17
Choose the correct letter, A, B, C or D.

14. What is the writer’s main point in the first paragraph?
A Our use of technology is having a hidden effect on us.
B Technology can be used to help youngsters to read.
C Travellers should be encouraged to use technology on planes.
D Playing games is a more popular use of technology than reading.

15. What main point does Sherry Turkle make about innovation?
A Technological innovation has led to a reduction in print reading.
B We should pay attention to what might be lost when innovation occurs.
C We should encourage more young people to become involved in innovation.
D There is a difference between developing products and developing ideas.

16. What point is the writer making in the fourth paragraph?
A Humans have an inborn ability to read and write.
B Reading can be done using many different mediums.
C Writing systems make unexpected demands on the brain.
D Some brain circuits adjust to whatever is required of them.

17. According to Mark Edmundson, the attitude of college students
A has changed the way he teaches.
B has influenced what they select to read.
C does not worry him as much as it does others.
D does not match the views of the general public.

Questions 18-22
Complete the summary using the list of words, A-H, below. Write the correct letter, A-H, in boxes 18-22 on your answer sheet.

Studies on digital screen use

There have been many studies on digital screen use, showing some (18) ………………… trends. Psychologist Anne Mangen gave high-school students a short story to read, half using digital and half using print mediums. Her team then used a question-and-answer technique to find out how (19) ………………… each group’s understanding of the plot was. The findings showed a clear pattern in the responses, with those who read screens finding the order of information (20) ………………….. to recall. Studies by Ziming Liu show that students are tending to read (21) …………………. words and phrases in a text to save time. This approach, she says, gives the reader a superficial understanding of the (22) ……………. content of material, leaving no time for thought.

A fast
B isolated
C emotional
D worrying
E many
F hard
G combined
H thorough

Questions 23-26
Do the following statements agree with the views of the writer in Reading Passage? In boxes 23-26 on your answer sheet, write

YES  if the statement agrees with the views of the writer
NO  if the statement contradicts the views of the writer
NOT GIVEN  if it is impossible to say what the writer thinks about this

23. The medium we use to read can affect our choice of reading content.
24. Some age groups are more likely to lose their complex reading skills than others.
25. False information has become more widespread in today’s digital era.
26. We still have opportunities to rectify the problems that technology is presenting.

Attitudes towards Artificial Intelligence

A Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences. Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong. If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

B Take the case of Watson for Oncology, one of technology giant IBM’s supercomputer programs. Their attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson’s recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine-learning algorithms were simply too complex to be fully understood by humans. Consequently, this caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

C This is just one example of people’s lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to comprehend. And interacting with something we don’t understand can cause anxiety and give us a sense that we’re losing control. Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.

D Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded. This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as “confirmation bias”. As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

E Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them. Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.

F Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future. We don’t need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.

Questions 27-32
Reading Passage 3 has six sections, A-F. Choose the correct heading for each section from the list of headings below. Write the correct number, i-viii, in boxes 27-32 on your answer sheet.

List of Headings
i An increasing divergence of attitudes towards AI
ii Reasons why we have more faith in human judgement than in AI
iii The superiority of AI projections over those made by humans
iv The process by which AI can help us make good decisions
v The advantages of involving users in AI processes
vi Widespread distrust of an AI innovation
vii Encouraging openness about how AI functions
viii A surprisingly successful AI application

27. Section A
28. Section B
29. Section C
30. Section D
31. Section E
32. Section F

Questions 33-35
Choose the correct letter, A, B, C or D.

33. What is the writer doing in Section A?
A providing a solution to a concern
B justifying an opinion about an issue
C highlighting the existence of a problem
D explaining the reasons for a phenomenon

34. According to Section C, why might some people be reluctant to accept AI?
A They are afraid it will replace humans in decision-making jobs.
B Its complexity makes them feel that they are at a disadvantage.
C They would rather wait for the technology to be tested over a period of time.
D Misunderstandings about how it works make it seem more challenging than it is.

35. What does the writer say about the media in Section C of the text?
A It leads the public to be mistrustful of AI.
B It devotes an excessive amount of attention to AI.
C Its reports of incidents involving AI are often inaccurate.
D It gives the impression that AI failures are due to designer error.

Questions 36-40
Do the following statements agree with the claims of the writer in Reading Passage? In boxes 36-40 on your answer sheet, write

YES  if the statement agrees with the claims of the writer
NO  if the statement contradicts the claims of the writer
NOT GIVEN  if it is impossible to say what the writer thinks about this

36. Subjective depictions of AI in sci-fi films make people change their opinions about automation.
37. Portrayals of AI in media and entertainment are likely to become more positive.
38. Rejection of the possibilities of AI may have a negative effect on many people’s lives.
39. Familiarity with AI has very little impact on people’s attitudes to the technology.
40. AI applications which users are able to modify are more likely to gain consumer approval.


1. Posts
2. Canal
3. Ventilation
4. Lid
5. Weight
6. Climbing
7. False
8. Not given
9. False
10. True
11. Gold
12. (the) architects’ (name)
13. (the) harbor/ harbour
14. A
15. B
16. D
17. B
18. D
19. H
20. F
21. B
22. C
23. Yes
24. No
25. Not given
26. Yes
27. iii
28. vi
29. ii
30. i
31. vii
32. v
33. C
34. B
35. A
36. No
37. Not given
38. Yes
39. No
40. Yes


Nutmeg – a valuable spice

The nutmeg tree, Myristica fragrans, is a large evergreen tree native to Southeast Asia. Until the late 18th century, it only grew in one place in the world: a small group of islands in the Banda Sea, part of the Moluccas – or Spice Islands – in northeastern Indonesia. The tree is thickly branched with dense foliage of tough, dark green oval leaves, and produces small, yellow, bell-shaped flowers and pale yellow pear-shaped fruits. The fruit is encased in a fleshy husk. When the fruit is ripe, this husk splits into two halves along a ridge running the length of the fruit. Inside is a purple-brown shiny seed, 2-3 cm long by about 2 cm across, surrounded by a lacy red or crimson covering called an ‘aril’. These are the sources of the two spices nutmeg and mace, the former being produced from the dried seed and the latter from the aril.

Nutmeg was a highly prized and costly ingredient in European cuisine in the Middle Ages, and was used as a flavouring, medicinal, and preservative agent. Throughout this period, the Arabs were the exclusive importers of the spice to Europe. They sold nutmeg for high prices to merchants based in Venice, but they never revealed the exact location of the source of this extremely valuable commodity. The Arab-Venetian dominance of the trade finally ended in 1512, when the Portuguese reached the Banda Islands and began exploiting its precious resources.

Always in danger of competition from neighbouring Spain, the Portuguese began subcontracting their spice distribution to Dutch traders. Profits began to flow into the Netherlands, and the Dutch commercial fleet swiftly grew into one of the largest in the world. The Dutch quietly gained control of most of the shipping and trading of spices in Northern Europe. Then, in 1580, Portugal fell under Spanish rule, and by the end of the 16th century the Dutch found themselves locked out of the market. As prices for pepper, nutmeg, and other spices soared across Europe, they decided to fight back.

In 1602, Dutch merchants founded the VOC, a trading corporation better known as the Dutch East India Company. By 1617, the VOC was the richest commercial operation in the world. The company had 50,000 employees worldwide, with a private army of 30,000 men and a fleet of 200 ships. At the same time, thousands of people across Europe were dying of the plague, a highly contagious and deadly disease. Doctors were desperate for a way to stop the spread of this disease, and they decided nutmeg held the cure. Everybody wanted nutmeg, and many were willing to spare no expense to have it. Nutmeg bought for a few pennies in Indonesia could be sold for 68,000 times its original cost on the streets of London. The only problem was the short supply. And that’s where the Dutch found their opportunity.

The Banda Islands were ruled by local sultans who insisted on maintaining a neutral trading policy towards foreign powers. This allowed them to avoid the presence of Portuguese or Spanish troops on their soil, but it also left them unprotected from other invaders. In 1621, the Dutch arrived and took over. Once securely in control of the Bandas, the Dutch went to work protecting their new investment. They concentrated all nutmeg production into a few easily guarded areas, uprooting and destroying any trees outside the plantation zones. Anyone caught growing a nutmeg seedling or carrying seeds without the proper authority was severely punished. In addition, all exported nutmeg was covered with lime to make sure there was no chance a fertile seed which could be grown elsewhere would leave the islands. There was only one obstacle to Dutch domination. One of the Banda Islands, a sliver of land called Run, only 3 km long by less than 1 km wide, was under the control of the British. After decades of fighting for control of this tiny island, the Dutch and British arrived at a compromise settlement, the Treaty of Breda, in 1667. Intent on securing their hold over every nutmeg-producing island, the Dutch offered a trade: if the British would give them the island of Run, they would in turn give Britain a distant and much less valuable island in North America. The British agreed. That other island was Manhattan, which is how New Amsterdam became New York. The Dutch now had a monopoly over the nutmeg trade which would last for another century.

Then, in 1770, a Frenchman named Pierre Poivre successfully smuggled nutmeg plants to safety in Mauritius, an island off the coast of Africa. Some of these were later exported to the Caribbean where they thrived, especially on the island of Grenada. Next, in 1778, a volcanic eruption in the Banda region caused a tsunami that wiped out half the nutmeg groves. Finally, in 1809, the British returned to Indonesia and seized the Banda Islands by force. They returned the islands to the Dutch in 1817, but not before transplanting hundreds of nutmeg seedlings to plantations in several locations across southern Asia. The Dutch nutmeg monopoly was over.

Today, nutmeg is grown in Indonesia, the Caribbean, India, Malaysia, Papua New Guinea and Sri Lanka, and world nutmeg production is estimated to average between 10,000 and 12,000 tonnes per year.

Questions 1-4

Complete the notes below. Write ONE WORD ONLY from the passage for each answer.

The nutmeg tree and fruit
• The leaves of the tree are (1) ……………….. in shape
• The (2) ………………. surrounds the fruit and breaks open when the fruit is ripe
• The (3) ………………. is used to produce the spice nutmeg
• The covering known as the aril is used to produce (4) ………………

Questions 5-7
Do the following statements agree with the information given in Reading Passage 1? In boxes 5-7, write

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

5. In the Middle Ages, most Europeans knew where nutmeg was grown.
6. The VOC was the world’s first major trading company.
7. Following the Treaty of Breda, the Dutch had control of all the islands where nutmeg grew.

Questions 8-13
Complete the table below. Choose ONE WORD ONLY from the passage.


1. Oval
2. Husk
3. Seed
4. Mace
5. False
6. Not given
7. True
8. Arabs
9. Plague
10. Lime
11. Run
12. Mauritius
13. Tsunami
14. C
15. B
16. E
17. G
18. D
19. Human error
20. Car (-) sharing
21. Ownership
22. Mileage
23. C, D
24. C, D
25. A, E
26. A, E
27. A
28. C
29. C
30. D
31. A
32. B
33. E
34. A
35. D
36. E
37. B
38. (unique) expeditions
39. Uncontacted/ isolated
40. (land) surface

ALBERT EINSTEIN

Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing.

Einstein was born in Wurttemberg, Germany on 14th March 1879. His family was Jewish but he had not been very religious in his youth, although he became very interested in Judaism in later life.

It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics.

In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist.

Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein’s granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva’s son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.

Questions 1-8

Do the following statements agree with the information given in the text? For questions 1-8, write:

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

1. The general theory of relativity is a very important theory in modern physics.
2. Einstein had such difficulty with language that those around him thought he would never learn how to speak.
3. It seemed to Einstein that nothing could be pushing the needle of the compass around except the wind.
4. Einstein enjoyed the teaching methods in Switzerland.
5. Einstein taught himself how to play the violin.
6. His daughter died of schizophrenia when she was two.
7. The existence of a daughter only became known to the world between 1897 and 1903.
8. In 1933 Einstein moved to the United States where he became an American citizen.

Questions 9-10
Complete the sentences below. Choose NO MORE THAN THREE WORDS from the text for each answer.

He tried hard to understand how the needle could seem to move itself so that it always (9) ………………
He often did not go to classes and used the time to study physics (10) ……………… or to play music.

Questions 11-13
Choose the correct letter, A, B, C or D.

11. The name of Einstein’s daughter
A was not chosen by him.
B is a mystery.
C is shared by his granddaughter.
D was discovered in a shoe box.

12. His teachers would not recommend him for a university position because
A they did not think highly of him.
B they thought he was a Nazi.
C his wife was Serbian.
D he seldom skipped classes.

13. The famous physicist Albert Einstein was of
A Swiss origin.
B Jewish origin.
C American origin.
D Austrian origin.

DRINKING FILTERED WATER

A The body is made up mainly of water. This means that the quality of water that we drink every day has an important effect on our health. Filtered water is healthier than tap water and some bottled water. This is because it is free of contaminants, that is, of substances that make it dirty or harmful. Substances that settle on the bottom of a glass of tap water and microorganisms that carry diseases (known as bacteria or germs) are examples of contaminants. Filtered water is also free of poisonous metals and chemicals that are common in tap water and even in some bottled water brands.

B The authorities know that normal tap water is full of contaminants and they use chemicals, such as chlorine and bromine, in order to disinfect it. But such chemicals are hardly safe. Indeed, their use in water is associated with many different conditions and they are particularly dangerous for children and pregnant women. For example, consuming bromine for a long time may result in low blood pressure, which may then bring about poisoning of the brain, heart, kidneys and liver. Filtered water is typically free of such water disinfectant chemicals.

C Filtered water is also free of metals, such as mercury and lead. Mercury has ended up in our drinking water mainly because the dental mixtures used by dentists have not been disposed of safely for a long time. Scientists believe there is a connection between mercury in the water and many allergies and cancers, as well as disorders such as ADD, OCD, autism and depression.

D Lead, on the other hand, typically finds its way to our drinking water due to pipe leaks. Of course, modern pipes are not made of lead, but pipes in old houses usually are. Lead is a well-known carcinogen and is associated with pregnancy problems and birth defects. This is another reason why children and pregnant women must drink filtered water.

E The benefits of water are well known. We all know, for example, that it helps to detoxify the body. So, the purer the water we drink, the easier it is for the body to rid itself of toxins. The result of drinking filtered water is that the body does not have to use as much of its energy on detoxification as it would when drinking unfiltered water. This means that drinking filtered water is good for our health in general. That is because the body can perform all of its functions much more easily, and this results in improved metabolism, better weight management, improved joint lubrication as well as efficient skin hydration.

F There are many different ways to filter water and each type of filter targets different contaminants. For example, activated carbon water filters are very good at taking chlorine out. Ozone water filters, on the other hand, are particularly effective at removing germs.

G For this reason, it is very important to know exactly what is in the water that we drink so that we can decide what type of water filter to use. A Consumer Confidence Report (CCR) should be useful for this purpose. This is a certificate that is issued by public water suppliers every year, listing the contaminants present in the water. If you know what these contaminants are, then it is easier to decide which type of water filter to get.

Questions 14-20

The text has seven paragraphs, A-G.

Which paragraph contains the following information?

14. a short summary of the main points of the text
15. a variety of methods used for water filtration
16. making it easier for the body to get rid of dangerous chemicals
17. finding out which contaminants your water filter should target
18. allergies caused by dangerous metals
19. a dangerous metal found in the plumbing of old buildings
20. chemicals of cleaning products that destroy bacteria

Questions 21-26
Do the following statements agree with the information given in the text? For questions 21-26, write:

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

21. The type of water you consume on a regular basis has a great impact on your overall health and wellness.
22. Filtered water typically contains water disinfectant chemicals.
23. Exposure to disinfectant chemicals is linked with poisoning of the vital organs.
24. Drinking tap water helps minimise your exposure to harmful elements.
25. People wearing artificial teeth are more likely to be contaminated.
26. People who are depressed often suffer from dehydration.

SPEECH DYSFLUENCY AND POPULAR FILLERS

A speech dysfluency is any of various breaks, irregularities or sound-filled pauses that we make when we are speaking, which are commonly known as fillers. These include words and sentences that are not finished, repeated phrases or syllables, instances of speakers correcting their own mistakes as they speak and “words” such as ‘huh’, ‘uh’, ‘erm’, ‘um’, ‘hmm’, ‘err’, ‘like’, ‘you know’ and ‘well’.

Fillers are parts of speech which are not generally recognised as meaningful, and they include speech problems such as stuttering (repeating the first consonant of some words). Fillers are normally avoided on television and in films, but they occur quite regularly in everyday conversation, sometimes making up more than 20% of “words” in speech. But they can also be used as a pause for thought.

Research in linguistics has shown that fillers change across cultures and that even the different English-speaking nations use different fillers. For example, Americans use pauses such as ‘um’ or ’em’ whereas the British say ‘uh’ or ‘eh’. Spanish speakers say ‘ehhh’, and in Latin America (where they also speak Spanish) but not Spain, ‘este’ is used (normally meaning ‘this’).

Recent linguistic research has suggested that the use of ‘uh’ and ‘um’ in English is connected to the speaker’s mental and emotional state. For example, while pausing to say ‘uh’ or ‘um’ the brain may be planning the use of future words. According to the University of Pennsylvania linguist Mark Liberman, ‘um’ generally comes before a longer or more important pause than ‘uh’. At least that’s what he used to think.

Liberman has discovered that as Americans get older, they use ‘uh’ more than ‘um’, and that men use ‘uh’ more than women no matter their age. But the opposite is true of ‘um’. The young say ‘um’ more often than the old. And women say ‘um’ more often than men at every age. This was an unexpected result, because scientists used to think that fillers had to do more with the amount of time a speaker pauses for, rather than with who the speaker is.

Liberman mentioned his finding to fellow linguists in the Netherlands, and this encouraged the group to look for a pattern outside American English. They studied British and Scottish English, German, Danish, Dutch and Norwegian, and found that women and younger people said ‘um’ more than ‘uh’ in those languages as well.

Their conclusion is that it is simply a case of language change in progress, and that women and younger people are leading the change. And there is nothing strange about this. Women and young people normally are the typical pioneers of most language change. What is strange, however, is that ‘um’ is replacing ‘uh’ across at least two continents and five Germanic languages. Now this really is a mystery.

The University of Edinburgh sociolinguist Josef Fruehwald may have an answer. In his view, ‘um’ and ‘uh’ are pretty much equivalent. The fact that young people and women prefer it is not significant. This often happens in language when there are two options. People start using one more often until the other is no longer an option. It’s just one of those things.

As to how such a trend might have gone from one language to another, there is a simple explanation, according to Fruehwald. English is probably influencing the other languages. We all know that in many countries languages are constantly borrowing words and expressions from English, so why not borrow fillers, too? Of course, we don’t know for a fact whether that’s actually what’s happening with ‘um’, but it is a likely story.

Questions 27-34

Do the following statements agree with the information given in the text? For questions 27-34, write

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

27. Fillers are usually expressed as pauses and probably have no linguistic meaning although they may have a purpose.
28. In general, fillers vary across cultures.
29. Fillers are uncommon in everyday language.
30. American men use ‘uh’ more than American women do.
31. Younger Spaniards say ‘ehhh’ more often than older Spaniards.
32. In the past linguists did not think that fillers are about the amount of time a speaker hesitates.
33. During a coffee break Liberman was chatting with a small group of researchers.
34. Fruehwald does not believe that there are age and gender differences related to ‘um’ and ‘uh’.

Questions 35-40
Choose the correct letter, A, B, C or D.

35. Fillers are not
A used to give the speaker time to think.
B phrases that are restated.
C used across cultures.
D popular with the media.

36. It had originally seemed to Mark Liberman that
A ‘um’ was followed by a less significant pause than ‘uh’.
B ‘uh’ was followed by a shorter pause than ‘um’.
C ‘uh’ was followed by a longer pause than ‘um’.
D the use of ‘um’ meant the speaker was sensitive.

37. Contrary to what linguists used to think, it is now believed that the choice of filler
A may have led to disagreements.
B depends on the characteristics of the speaker.
C has nothing to do with sex.
D only matters to older people.

38. According to Liberman, it’s still a puzzle why
A a specific language change is so widely spread.
B the two fillers are comparable.
C we have two options.
D ‘um’ is preferred by women and young people.

39. Concerning the normal changes that all languages go through as time goes by,
A old men are impossible to teach.
B men in general are very conservative.
C young men simply copy the speech of young women.
D women play a more important role than men.

40. According to Fruehwald, the fact that ‘um’ is used more than ‘uh’
A proves that ‘um’ is less important.
B shows that young people have low standards.
C shows that they have different meanings.
D is just a coincidence.


1. True
2. True
3. False
4. False
5. NG
6. NG
7. False
8. NG
9. Pointed north
10. On his own
11. B
12. A
13. B
14. A
15. F
16. E
17. G
18. C
19. D
20. B
21. True
22. False
23. True
24. False
25. NG
26. NG
27. True
28. True
29. False
30. True
31. NG
32. False
33. NG
34. False
35. D
36. B
37. B
38. A
39. D
40. D



Daydreaming

Everyone daydreams sometimes. We sit or lie down, close our eyes and use our imagination to think about something that might happen in the future or could have happened in the past. Most daydreaming is pleasant. We would like the daydream to happen and we would be very happy if it did actually happen. We might daydream that we are in another person’s place, or doing something that we have always wanted to do, or that other people like or admire us much more than they normally do.Daydreams are not dreams, because we can only daydream if we are awake. Also, we choose what our daydreams will be about, which we cannot usually do with dreams. With many daydreams, we know that what we imagine is unlikely to happen. At least, if it does happen, it probably will not do so in the way we want it to. However, some daydreams are about things that are likely to happen. With these, our daydreams often help us to work out what we want to do, or how to do it to get the best results. So, these daydreams are helpful. We use our imagination to help us understand the world and other people.Daydreams can help people to be creative. People in creative or artistic careers, such as composers, novelists and filmmakers, develop new ideas through daydreaming. This is also true of research scientists and mathematicians. In fact, Albert Einstein said that imagination is more important than knowledge because knowledge is limited whereas imagination is not.Research in the 1980s showed that most daydreams are about ordinary, everyday events. It also showed that over 75% of workers in so-called ‘boring jobs’, such as lorry drivers and security guards, spend a lot of time daydreaming in order to make their time at work more interesting. Recent research has also shown that daydreaming has a positive effect on the brain. Experiments with MRI brain scans show that the parts of the brain linked with complex problem-solving are more active during daydreaming. 
Researchers conclude that daydreaming is an activity in which the brain consolidates learning. In this respect, daydreaming is the same as dreaming during sleep.

Although there do seem to be many advantages with daydreaming, in many cultures it is considered a bad thing to do. One reason for this is that when you are daydreaming, you are not working. In the 19th century, for example, people who daydreamed a lot were judged to be lazy. This happened in particular when people started working in factories on assembly lines. When you work on an assembly line, all you do is one small task again and again, every time exactly the same. It is rather repetitive and, obviously, you cannot be creative. So many people decided that there was no benefit in daydreaming.

Other people have said that daydreaming leads to ‘escapism’ and that this is not healthy, either. Escapist people spend a lot of time living in a dream world in which they are successful and popular, instead of trying to deal with the problems they face in the real world. Such people often seem to be unhappy and are unable or unwilling to improve their daily lives. Indeed, recent studies show that people who often daydream have fewer close friends than other people. In fact, they often do not have any close friends at all.



Questions 1-8
Do the following statements agree with the information given in the text? For questions 1-8, write

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

1. People usually daydream when they are walking around.
2. Some people can daydream when they are asleep.
3. Some daydreams help us to be more successful in our lives.
4. Most lorry drivers daydream in their jobs to make them more interesting.
5. Factory workers daydream more than lorry drivers.
6. Daydreaming helps people to be creative.
7. Old people daydream more than young people.
8. Escapist people are generally very happy.

Questions 9-10
Complete the sentences below. Choose NO MORE THAN THREE WORDS from the text for each answer.

Writers, artists and other creative people use daydreaming to (9)……………….
The areas of the brain used in daydreaming are also used for complicated (10)…………..

Questions 11-13
Choose the correct letter, A, B, C or D.

11. Daydreams are
A dreams that we have when we fall asleep in daytime.
B about things that happened that make us sad.
C often about things that we would like to happen.
D activities that only a few people are able to do.

12. In the nineteenth century, many people believed that daydreaming was
A helpful in factory work.
B a way of avoiding work.
C something that few people did.
D a healthy activity.

13. People who daydream a lot
A usually have creative jobs.
B are much happier than other people.
C are less intelligent than other people.
D do not have as many friends as other people.


TRICKY SUMS AND PSYCHOLOGY

A In their first years of studying mathematics at school, children all over the world usually have to learn the times table, also known as the multiplication table, which shows what you get when you multiply numbers together. Children have traditionally learned their times table by going from ‘1 times 1 is 1’ all the way up to ‘12 times 12 is 144’.

B Times tables have been around for a very long time now. The oldest known tables using base 10 numbers, the base that is now used everywhere in the world, are written on bamboo strips dating from 305 BC, found in China. However, in many European cultures the times table is named after the Ancient Greek mathematician and philosopher Pythagoras (570-495 BC). And so it is called the Table of Pythagoras in many languages, including French and Italian.

C In 1820, in his book The Philosophy of Arithmetic, the mathematician John Leslie recommended that young pupils memorise the times table up to 25 x 25. Nowadays, however, educators generally believe it is important for children to memorise the table up to 9 x 9, 10 x 10 or 12 x 12.

D The current aim in the UK is for school pupils to know all their times tables up to 12 x 12 by the age of nine. However, many people do not know them, even as adults. Recently, some politicians have been asked arithmetical questions of this kind. For example, in 1998, the schools minister Stephen Byers was asked the answer to 7 x 8. He got the answer wrong, saying 54 rather than 56, and everyone laughed at him.

E In 2014, a young boy asked the UK Chancellor George Osborne the exact same question. As he had passed A-level maths and was in charge of the UK’s economic policies at the time, you would expect him to know the answer. However, he simply said, ‘I’ve made it a rule in life not to answer such questions.’

F Why would a politician refuse to answer such a question? It is certainly true that some sums are much harder than others. Research has shown that learning and remembering sums involving 6, 7, 8 and 9 tends to be harder than remembering sums involving other numbers. And it is even harder when 6, 7, 8 and 9 are multiplied by each other. Studies often find that the hardest sum is 6 x 8, with 7 x 8 not far behind. However, even though 7 x 8 is a relatively difficult sum, it is unlikely that George Osborne did not know the answer. So there must be some other reason why he refused to answer the question.

G The answer is that Osborne was being ‘put on the spot’ and he didn’t like it. It is well known that when there is a lot of pressure to do something right, people often have difficulty doing something that they normally find easy. When you put someone on the spot and ask such a question, it causes stress. The person’s heart beats faster and their adrenalin levels go up. As a result, people will often make mistakes that they would not normally make. This is called ‘choking’. Choking often happens in sport, such as when a footballer takes a crucial penalty. In the same way, the boy’s question put Osborne under great pressure. He knew it would be a disaster for him if he got the answer to such a simple question wrong and feared that he might choke. And that is why he refused to answer the question.



Questions 14-19
The text has seven paragraphs, A-G.

Which paragraph contains the following information?

14. a 19th-century opinion of what children should learn
15. the most difficult sums
16. the effect of pressure on doing something
17. how children learn the times table
18. a politician who got a sum wrong
19. a history of the times table

Questions 20-25
Do the following statements agree with the information given in the text? For questions 20-25, write

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

20. Pythagoras invented the times table in China.
21. Stephen Byers and George Osborne were asked the same question.
22. All children in the UK have to learn the multiplication table.
23. George Osborne did not know the answer to 7 X 8.
24. 7 X 8 is the hardest sum that children have to learn.
25. Stephen Byers got the sum wrong because he choked.


Care in the Community

‘Bedlam’ is a word that has become synonymous in the English language with chaos and disorder. The term itself derives from the shortened name for a former 16th century London institution for the mentally ill, known as St. Mary of Bethlehem. This institution was so notorious that its name was to become a byword for mayhem. Patient ‘treatment’ amounted to little more than legitimised abuse. Inmates were beaten and forced to live in unsanitary conditions, whilst others were placed on display to a curious public as a side-show. There is little indication to suggest that other institutions founded at around the same time in other European countries were much better. Even up until the mid-twentieth century, institutions for the mentally ill were regarded as being more places of isolation and punishment than healing and solace. In popular literature of the Victorian era that reflected true-life events, individuals were frequently sent to the ‘madhouse’ as a legal means of permanently disposing of an unwanted heir or spouse. Later, in the mid-twentieth century, institutes for the mentally ill regularly carried out invasive brain surgery known as a ‘lobotomy’ on violent patients without their consent. The aim was to ‘calm’ the patient but ended up producing a patient that was little more than a zombie. Such a procedure is well documented to devastating effect in the film ‘One Flew Over the Cuckoo’s Nest’. Little wonder then that the appalling catalogue of treatment of the mentally ill led to a call for change from social activists and psychologists alike.

Improvements began to be seen in institutions from the mid-50s onwards, along with the introduction of care in the community for less severely ill patients. Community care was seen as a more humane and purposeful approach to dealing with the mentally ill.
Whereas institutionalised patients lived out their existence in confinement, forced to obey institutional regulations, patients in the community were free to live a relatively independent life. The patient was never left purely to their own devices as a variety of services could theoretically be accessed by the individual. In its early stages, however, community care consisted primarily of help from the patient’s extended family network. In more recent years, such care has extended to the provision of specialist community mental health teams (CMHTs) in the UK. Such teams cover a wide range of services from rehabilitation to home treatment and assessment. In addition, psychiatric nurses are on hand to administer prescription medication and give injections. The patient is therefore provided with the necessary help that they need to survive in the everyday world whilst maintaining a degree of autonomy.

Often, though, when a policy is put into practice, its failings become apparent. This is true for the policy of care in the community. Whilst back-up services may exist, an individual may not call upon them when needed, due to reluctance or inability to assess their own condition. As a result, such an individual may be alone during a critical phase of their illness, which could lead them to self-harm or even become a threat to other members of their community. Whilst this might be an extreme-case scenario, there is also the issue of social alienation that needs to be considered. Integration into the community may not be sufficient to allow the individual to find work, leading to poverty and isolation. Social exclusion could then cause a relapse as the individual is left to battle mental health problems alone. The solution, therefore, is to ensure that the patient is always in touch with professional helpers and not left alone to fend for themselves.
It should always be remembered that whilst you can take the patient out of the institution, you can’t take the institution out of the patient.

When questioned about care in the community, there seems to be a division of opinion amongst members of the public and within the mental healthcare profession itself. Dr. Mayalla, a practising clinical psychologist, is inclined to believe that whilst certain patients may benefit from care in the community, the scheme isn’t for everyone. ‘Those suffering moderate cases of mental illness stand to gain more from care in the community than those with more pronounced mental illness. I don’t think it’s a one-size-fits-all policy. But I also think that there is a far better infrastructure of helpers and social workers in place now than previously and the scheme stands a greater chance of success than in the past.’

Anita Brown, mother of three, takes a different view. ‘As a mother, I’m very protective towards my children. As a result, I would not put my support behind any scheme that I felt might put my children in danger… I guess there must be assessment methods in place to ensure that dangerous individuals are not let loose amongst the public but I’m not for it at all. I like to feel secure where I live, but more to the point, that my children are not under any threat.’

Bob Ratchett, a former mental health nurse, takes a more positive view on community care projects. ‘Having worked in the field myself, I’ve seen how a patient can benefit from living an independent life, away from an institution. Obviously, only individuals well on their way to recovery would be suitable for consideration as participants in such a scheme. If you think about it, is it really fair to condemn an individual to a lifetime in an institution when they could be living a fairly fulfilled and independent life outside the institution?’



Questions 26-31
Choose the correct letter, A, B, C or D.

26. Which of the following statements is accurate?
A In the 20th century, illegal surgical procedures were carried out on the mentally ill.
B The Victorian era saw an increase in mental illness amongst married couples.
C Mental institutions of the past were better-equipped for dealing with the mentally ill.
D In the past, others often benefitted when a patient was sent to a mental asylum.

27. What does the writer mean by patient treatment being ‘legitimised abuse’?
A There were proper guidelines for the punishment of mentally ill patients.
B Maltreatment of mentally ill patients was not illegal and so was tolerated.
C Only those who were legally entitled to do so could punish mentally ill patients.
D Physical abuse of mentally ill patients was a legal requirement of mental institutions.

28. What brought about changes in the treatment of mentally ill patients?
A A radio documentary exposed patient maltreatment.
B People rebelled against the consistent abuse of mentally ill patients.
C Previous treatments of mentally ill patients were proved to be ineffective.
D The maltreatment of mentally ill patients could never be revealed.

29. What was a feature of early care in the community schemes?
A Patient support was the responsibility more of relatives than professionals.
B Advanced professional help was available to patients.
C All mentally ill patients could benefit from the scheme.
D Patients were allowed to enjoy full independence.

30. What is true of care in the community schemes today?
A They permit greater patient autonomy.
B More professional services are available to patients.
C Family support networks have become unnecessary.
D All patients can now become part of these schemes.

31. What can be said of the writer’s attitude towards care in the community?
A He believes that the scheme has proved to be a failure.
B He believes that it can only work under certain circumstances.
C He believes that it will never work as mentally ill patients will always be disadvantaged.
D He believes it has failed due to patient neglect by professional helpers.

Questions 32-36
Look at the following statements, 32-36, and the list of people, A-C.

Match each statement to the correct person.

A Dr. Mayalla
B Anita Brown
C Bob Ratchett

32. This person acknowledges certain inadequacies in the concept of care in the community, but recognises that attempts have been made to improve on existing schemes.
33. This person, whilst emphasising the benefits to the patient from care in the community schemes, is critical of traditional care methods.
34. This person’s views have been moderated by their professional contact with the mentally ill.
35. This person places the welfare of others above that of the mentally ill.
36. This person acknowledges that a mistrust of care in the community schemes may be unfounded.

Questions 37-40
Do the following statements agree with the information given in the text? For questions 37-40, write

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

37. There is a better understanding of the dynamics of mental illness today.
38. Community care schemes do not provide adequate psychological support for patients.
39. Dr. Mayalla believes that the scheme is less successful than in the past.
40. The goal of community care schemes is to make patients less dependent on the system.


1. False
2. False
3. True
4. True
5. NG
6. True
7. NG
8. False
9. develop new ideas
10. problem-solving
11. C
12. B
13. D
14. C
15. F
16. G
17. A
18. D
19. B
20. False
21. True
22. True
23. False
24. False
25. NG
26. D
27. B
28. B
29. A
30. B
31. B
32. A
33. C
34. C
35. B
36. B
37. NG
38. False
39. False
40. True


Spot the Difference

A Taxonomic history has been made this week, at least according to the World Wildlife Fund (WWF), a conservation group. Scientists have described a new species of clouded leopard from the tropical forests of Indonesia with spots (or “clouds”, as they are poetically known) smaller than those of other clouded leopards, with fur a little darker and with a double as opposed to a “partial double” stripe down its back.

B However, no previously unknown beast has suddenly leapt out from the forest. Instead, some scientists have proposed a change in the official taxonomic accounting system of clouded leopards. Where there were four subspecies there will likely now be two species. A genetic analysis and a closer inspection of museum specimens’ coats published in Current Biology has found no relevant difference between three subspecies described 50 years ago from continental Asia and from the Hainan and Taiwan islands. The 5,000-11,000 clouded leopards on Borneo, the 3,000-7,000 on Sumatra, and the remaining few on the nearby Batu islands can now, the authors say, claim a more elevated distinction as a species.

C What this actually means is fuzzy and whether it is scientifically important is questionable. In any case, biologists do not agree on what species and subspecies are. Creatures are given Latin first and second names (corresponding to a genus and species) according to the convention of Carl von Linné, who was born 300 years ago this May. But Linnaeus, as he is more commonly known, thought of species as perfectly discrete units created by God. Darwinism has them as mutable things, generated gradually over time by natural selection. So, delineating when enough variation has evolved to justify a new category is largely a matter of taste.

D Take ants and butterflies. Ant experts have recently been waging a war against all types of species subdivision. Lepidopterists, on the other hand, cling to the double barrel second names from their discipline’s 19th-century tradition, and categorise many local subclasses within species found over wide areas. Thus, it would be futile – if one were so inclined – to attempt to compare the diversity of ant and butterfly populations.

E The traditional way around the problem is to call a species all members of a group that share the same gene pool. They can mate together and produce fertile offspring. Whether Indonesian clouded leopards can make cubs with continental ones remains unknown but seems probable. Instead, the claim this week is that genetics and slight differences in fur patterning are enough to justify rebranding the clouded leopard as two significant types. Genetically, that makes sense if many DNA variations correlate perfectly between members of the two groups. The authors did find some correlation, but they looked for it in only three Indonesian animals. A larger sample would have been more convincing, but harder to obtain.

F One thing is abundantly clear: conservationists who are trying to stop the destruction of the leopards’ habitat in Borneo and Sumatra see the announcement of a new species of big cat as a means to gain publicity and political capital. Upgrading subspecies to species is a strategy which James Mallet, of University College London, likes to call species inflation. It is a common by-product of genetic analysis, which can reveal differences between populations that the eye cannot. Creating ever more detailed genetic categories means creating smaller and increasingly restricted populations of more species. The trouble is that this risks devaluing the importance of the term “species”.

G The problem of redefining species by genetics is the creation of taxonomic confusion, a potentially serious difficulty for conservationists and others. The recent proposal to add the polar bear to the list of animals protected under America’s Endangered Species Act is an example. That seems all well and good. However, study the genetics and it transpires that polar bears are closer to some brown bears than some brown bears are to each other. Go by the genes and it seems that the polar bear would not count as a species in its own right (and thus might not enjoy the protection afforded to species) but should be labelled a subspecies of the brown bear.


Questions 1-4
The text has 7 paragraphs (A – G).

Which paragraph contains each of the following pieces of information?

1. How it is generally accepted that different species are named
2. The reason that conservationists are happy with the apparent discovery of a new species of leopard
3. How genes could cause a potential problem for conservationists
4. Some scientists want to change the way clouded leopards are classified into species and subspecies.

Questions 5-8
Complete the following sentences using NO MORE THAN TWO WORDS from the text for each answer.

It is difficult to decide exactly when there is enough (5)………………to say an animal is a new species.
It is (6)……………………..to compare the number of species of ant and butterfly.
Generally, animals of the same species can make (7)…………………together.
Some scientists claim that genetics has led to (8)……………….rather than the actual discovery of new species.

Questions 9-13
Do the following statements agree with the information given in the Reading Passage? In boxes 9-13, write

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

9. The possible new species of leopard appears different in two ways.
10. Darwinism created a problem with how species are defined.
11. Lepidopterists study ants.
12. Scientists are going to study more clouded leopards in Indonesia.
13. The writer believes that polar bears are not a species in their own right.


The Fertility Bust

A Falling populations – the despair of state pension systems – are often regarded with calmness, even a secret satisfaction, by ordinary people. Europeans no longer need large families to gather the harvest or to look after parents. They have used their good fortune to have fewer children, thinking this will make their lives better. Much of Europe is too crowded as it is. Is this all that is going on? Germans have been agonising about recent European Union estimates suggesting that 30% of German women are, and will remain, childless. The number is a guess: Germany does not collect figures like this. Even if the share is 25%, as other surveys suggest, it is by far the highest in Europe.

B Germany is something of an oddity in this. In most countries with low fertility, young women have their first child late, and stop at one. In Germany, women with children often have two or three, but many have none at all. Germany is also odd in experiencing low fertility for such a long time. Europe is demographically polarised. Countries in the north and west saw fertility fall early, in the 1960s. Recently, they have seen it stabilise or rise back towards replacement level (i.e. 2.1 births per woman). Countries in the south and east, on the other hand, saw fertility rates fall much faster, more recently (often to below 1.3, a rate at which the population falls by half every 45 years). Germany combines both. Its fertility rate fell below 2 in 1971. However, it has stayed low and is still only just above 1.3. This challenges the notion that European fertility is likely to stabilise at tolerable levels. It raises questions about whether the low birth rates of Italy and Poland, say, really are, as some have argued, merely temporary.

C The list of explanations for why German fertility has not rebounded is long. Michael Teitelbaum, a demographer at the Sloan Foundation in New York, ticks them off: poor childcare; unusually extended higher education; inflexible labour laws; high youth unemployment; and non-economic or cultural factors. One German writer, Gunter Grass, wrote a novel, “Headbirths”, in 1982, about Harm and Dorte Peters, “a model couple” who disport themselves on the beaches of Asia rather than invest time and trouble in bringing up a baby. “They keep a cat,” writes Mr. Grass, “and still have no child.” The novel is subtitled “The Germans Are Dying Out”. With the exception of this cultural factor, none of these features is peculiar to Germany. If social and economic explanations account for persistent low fertility there, then they may well produce the same persistence elsewhere.

D The reason for hoping otherwise is that the initial decline in southern and eastern Europe was drastic, and may be reversible. In the Mediterranean, demographic decline was associated with freeing young women from the constraints of traditional Catholicism, which encouraged large families. In eastern Europe, it was associated with the collapse in living standards and the ending of pro-birth policies. In both regions, as such temporary factors fade, fertility rates might, in principle, be expected to rise. Indeed, they may already be stabilising in Italy and Spain. Germany tells you that reversing these trends can be hard. There, and elsewhere, fertility rates did not merely fall; they went below what people said they wanted. In 1979, Eurobarometer asked Europeans how many children they would like. Almost everywhere, the answer was two: the traditional two-child ideal persisted even when people were not delivering it. This may have reflected old habits of mind. Or people may really be having fewer children than they claim to want.

E A recent paper suggests how this might come about. If women postpone their first child past their mid-30s, it may be too late to have a second even if they want one (the average age of first births in most of Europe is now 30). If everyone does the same, one child becomes the norm: a one-child policy by example rather than coercion, as it were. If women wait to start a family until they are established at work, they may end up postponing children longer than they might otherwise have chosen. When birth rates began to fall in Europe, this was said to be a simple matter of choice. That was true, but it is possible that fertility may overshoot below what people might naturally have chosen. For many years, politicians have argued that southern Europe will catch up from its fertility decline because women, having postponed their first child, will quickly have a second and third. The overshoot theory suggests there may be only partial recuperation. Postponement could permanently lower fertility, not just redistribute it across time.

F There is a twist. If people have fewer children than they claim to want, how they see the family may change, too. Research by Tomas Sobotka of the Vienna Institute of Demography suggests that, after decades of low fertility, a quarter of young German men and a fifth of young women say they have no intention of having children and think that this is fine. When Eurobarometer repeated its poll about ideal family size in 2001, support for the two-child model had fallen everywhere. Parts of Europe, then, may be entering a new demographic trap. People restrict family size from choice. Social, economic, and cultural factors then cause this natural fertility decline to overshoot. This changes expectations, to which people respond by having even fewer children. That does not necessarily mean that birth rates will fall even more: there may yet be some natural floor, but it could mean that recovery from very low fertility rates proves to be slow or even non-existent.


Questions 14-17
The text has 6 paragraphs (A – F).

Which paragraph does each of the following headings best fit?

14. Even further falls?
15. One-child policy
16. Germany differs
17. Possible reasons

Questions 18-22
According to the text, FIVE of the following statements are true.

Write the corresponding letters in answer boxes 18 to 22 in any order.

A Germany has the highest percentage of childless women
B Italy and Poland have high birth rates
C Most of the reasons given by Michael Teitelbaum are not unique to Germany
D Governments in the Eastern Europe encouraged people to have children
E In 1979, most families had one or two children
F European women who have a child later usually have more soon after
G In 2001, people wanted fewer children than in 1979, according to Eurobarometer research
H There may be a natural level at which birth rates stop declining

Questions 23-26
According to the information given in the text, choose the correct answer or answers from the choices given.

23. Reasons that ordinary Europeans do not think it is necessary to have as many children include
A less labour needed to farm land
B the feeling that Europe is too crowded
C a general dislike of children

24. Michael Teitelbaum adds the following reasons:
A poor childcare facilities
B longer working hours
C high unemployment amongst young adults

25. Initial declines in southern and eastern Europe were because (of)
A the reduced influence of the catholic church
B lower standards of living
C governments encouraged smaller families

26. People may have fewer children than they want because
A women are having children at a later age
B they are following the example of other people
C politicians want them to


Teens Try to Change the World, One Purchase at a Time

When classes adjourn here at the Fayerweather Street School, eighth-graders ignore the mall down the street and go straight to the place they consider much cooler: the local natural-foods grocer’s. There, they sometimes gather in groups of ten or more, smitten by a marketing atmosphere that links attractiveness to eating well. When the time comes to buy something even as small as a chocolate treat, they feel good knowing a farmer somewhere probably received a good price. “Food is something you need to stay alive,” says eighth-grader Emma Lewis. “Paying farmers well is really important because if we didn’t have any unprocessed food, we’d all be living on candy.”

Eating morally, as some describe it, is becoming a priority for teenagers as well as adults in their early 20s. What began a decade ago as a concern on college campuses to shun clothing made in overseas sweatshops has given birth to a parallel phenomenon in the food and beverage industries. Here, youthful shoppers are leveraging their dollars in a bid to reduce pesticide usage, limit deforestation, and make sure farmers are not left with a pittance on payday. Once again, college campuses are setting the pace. Students at 30 colleges have helped persuade administrators to make sure all cafeteria coffee comes with a “Fair Trade” label, which means bean pickers in Latin America and Africa were paid higher than the going rates. Their peers on another 300 campuses are pushing to follow suit, according to Students United for Fair Trade in Washington, D.C.

Coffee is just the beginning. Bon Appetit, an institutional food-service provider based in California, relies on organic and locally grown produce. In each year since 2001, more than 25 colleges have asked the company to bid on their food-service contracts. Though Bon Appetit intentionally limits its growth, its collegiate client list has grown from 58 to 71 in that period.
“It’s really just been in the last five years that we’ve seen students become concerned with where their food was coming from,” says Maisie Ganzler, Bon Appetit’s director of strategic initiatives. “Prior to that, students were excited to be getting sugared cereal.”

To reach a younger set that often does not drink coffee, Fair Trade importer Equal Exchange rolled out a line of cocoa in 2003 and chocolate bars in 2004. Profits in both sectors have justified the project, says Equal Exchange co-president Rob Everts. What is more, dozens of schools have contacted the firm to use its products in fundraisers and as classroom teaching tools. “Kids often are the ones who agitate in the family” for recycling and other eco-friendly practices, Mr. Everts says. “So, it’s a ripe audience.”

Concerns of today’s youthful food shoppers seem to reflect in some ways the idealism that inspired prior generations to join boycotts in solidarity with farm workers. Today’s efforts are distinct in that youthful consumers say they do not want to make sacrifices. They want high-quality, competitively priced goods that do not require exploitation of workers or the environment. They will gladly reward companies that deliver. One activist who shares this sentiment and hears it repeatedly from her peers is Summer Rayne Oakes, a recent college graduate and fashion model who promotes stylish Fair Trade clothing. “I’m not going to buy something that can’t stand on its own or looks bad just because it’s socially responsible,” Ms. Oakes says. “My generation has come to terms with the fact that we’re all consumers, and we all buy something. So, if I do have to buy food, what are the consequences?”

Wanting to ameliorate the world’s big problems can be frustrating, especially for those who feel ineffective because they are young.
Marketers are figuring out that teenagers resent this feeling of powerlessness and are pushing products that make young buyers feel as though they are making a difference, says Michael Wood, vice president of Teenage Research Unlimited. His example: Ethos Water from Starbucks, which contributes five cents from every bottle sold to water-purification centres in developing countries. “This is a very easy way for young people to contribute. All they have to do is buy bottled water,” Mr. Wood says. “Buying products or supporting companies that give them ways to support global issues is one way for them to get involved, and they really appreciate that.”

Convenience is also driving consumer activism. Joe Curnow, national coordinator of United Students for Fair Trade, says she first got involved about five years ago as a high schooler when she spent time hanging out in cafes. Buying coffee with an eco-friendly label “was a very easy way for me to express what I believed in”, she says. For young teens, consumption is their first foray into activism. At the Fayerweather Street School, Emma Lewis teamed up with classmates Kayla Kleinman and Therese LaRue to sell Fair Trade chocolate, cocoa, and other products at a school fundraiser in November. When the tally reached $8,000, they realised they were striking a chord.

Some adults hasten to point out the limitations of ethical consumption as a tool for doing good deeds and personal growth. Gary Lindsay, director of Children’s Ministries, encourages Fair Trade purchases, but he also organises children to collect toys for foster children and save coins for a playground-construction project in Tanzania. He says it helps them learn to enjoy helping others even when they are not getting anything tangible in return. “When we’re benefiting, how much are we really giving? Is it really sacrifice?” Mr. Lindsay asks. Of Fair Trade products, he says: “Those things are great when we’re given opportunities like that once in a while, but I think for us to expect that we should get something out of everything we do is a very selfish attitude to have.”

Questions 27-30
Choose the correct letter A, B or C.

27. Trying to change the world through what people purchase began with
A chocolate
B clothing
C coffee

28. Bon Appetit had _______ colleges using its services in 2006.
A 25
B 58
C 71

29. Buying Ethos Water helps provide money for
A poor people in Africa.
B poor farmers.
C clean water projects.

30. Joe Curnow first got involved with consumer activism through buying
A coffee
B cocoa
C water

Questions 31-35
Complete the following sentences using NO MORE THAN ONE WORD from the text for each answer.

31. Eighth-graders from Fayerweather Street School go to the natural-foods grocer’s rather than the ………….
32. Bon Appetit limits its growth …………………
33. Previously, young generations were ………………… to make sacrifices.
34. Young people can feel frustrated and …………….. because of their age.
35. Gary Lindsay …………………. people to buy products that make use of Fair Trade.

Questions 36-40
Do the following statements agree with the information given in Reading Passage? In boxes 36 – 40 write:

TRUE  if the statement agrees with the information
FALSE  if the statement contradicts the information
NOT GIVEN  if there is no information on this

36. Fair Trade coffee is more expensive than usual coffee.
37. Bon Appetit used to sell sugared cereal.
38. Rob Everts thinks that kids do not understand about protecting the environment.
39. Summer Rayne Oakes will wear clothes that do not look so good as long as they promote Fair Trade.
40. Gary Lindsay thinks people should do more than just consume ethically.


1. C
2. F
3. G
4. B
5. variation/difference
6. futile
7. fertile offspring
8. species inflation
9. False
10. True
11. False
12. NG
13. NG
14. F
15. E
16. B
17. C
18. A, C, D, G, H
19. A, C, D, G, H
20. A, C, D, G, H
21. A, C, D, G, H
22. A, C, D, G, H
23. A, B
24. A, C
25. A, B
26. A
27. B
28. C
29. C
30. A
31. mall
32. intentionally
33. inspired
34. ineffective
35. encourages
36. NG
37. NG
38. False
39. False
40. True