Difficult Texts

The Golden Gate Bridge was only a far-fetched idea before the 1900s. The Golden Gate itself, the channel between the two peninsulas where the city of San Francisco and Marin County face each other, is a narrow and turbulent body of water at the entrance to San Francisco Bay. In the early 1800s, when what is now San Francisco was the small village of Yerba Buena, sailboats provided passage across this narrow strait to the wealthy few who could afford this mode of transportation. As tiny Yerba Buena grew into the bustling city of San Francisco, there was a need for improved transportation across the channel. By 1868, there was regular ferry service for workers who lived in Marin County, and the notion of a bridge to span the channel was being circulated.

Spanning the gap between the idea and a feasible plan took time and determination. In 1916, the San Francisco Bulletin proclaimed that it was time to "bridge the Gate," and a feasibility study of the idea was undertaken by the government. Joseph Strauss, a well-known builder of bridges, took up the challenge after World War I and submitted his plan in 1921. The War Department granted the land for the project and named Strauss chief engineer; it approved his plan once convinced that there would be enough clearance between the water level and the bridge for tall ships to pass through.

The strategy for financing the bridge was to sell bonds to raise the whopping $35 million needed for the project and then repay the bonds with tolls from the bridge. A major hurdle was getting the voters behind the financial plan. To win their support, the district promised to hire only local workers with at least one year of residency, a pledge that was extremely popular during the era of the Great Depression. The plan passed with an overwhelming majority; with the financial issue resolved, official construction of the Golden Gate Bridge began on January 5, 1933.

The start of construction was celebrated with a parade and groundbreaking ceremonies. At these ceremonies, a telegram of congratulations from President Hoover was read to the exuberant crowd, and the University of California at Berkeley unveiled an 80-foot model of the bridge constructed by its engineering students. To spread the wonderful news to the whole state, 250 pigeons were let loose to carry the message of the birth of the Golden Gate Bridge.


Roman tunnels

The Persians, who lived in present-day Iran, were one of the first civilizations to build tunnels that provided a reliable supply of water to human settlements in dry areas. In the early first millennium BCE, they introduced the qanat method of tunnel construction, which consisted of placing posts over a hill in a straight line, to ensure that the tunnel kept to its route, and then digging vertical shafts down into the ground at regular intervals. Underground, workers removed the earth from between the ends of the shafts, creating a tunnel. The excavated soil was taken up to the surface using the shafts, which also provided ventilation during the work. Once the tunnel was completed, it allowed water to flow from the top of a hillside down towards a canal, which supplied water for human use. Remarkably, some qanats built by the Persians 2,700 years ago are still in use today.

The Persians later passed on their knowledge to the Romans, who also used the qanat method to construct water-supply tunnels for agriculture. Roman qanat tunnels were constructed with vertical shafts dug at intervals of between 30 and 60 meters. The shafts were equipped with handholds and footholds to help those climbing in and out of them and were covered with a wooden or stone lid. To ensure that the shafts were vertical, Romans hung a plumb line from a rod placed across the top of each shaft and made sure that the weight at the end of it hung in the center of the shaft. Plumb lines were also used to measure the depth of the shaft and to determine the slope of the tunnel. The 5.6-kilometer-long Claudius tunnel, built in 41 CE to drain the Fucine Lake in central Italy, had shafts that were up to 122 meters deep; it took 11 years to build and involved approximately 30,000 workers.

By the 6th century BCE, a second method of tunnel construction, called the counter-excavation method, had appeared, in which the tunnel was constructed from both ends. It was used to cut through high mountains when the qanat method was not a practical alternative. This method required greater planning and advanced knowledge of surveying, mathematics and geometry, as both ends of a tunnel had to meet correctly at the center of the mountain. Adjustments to the direction of the tunnel also had to be made whenever builders encountered geological problems or when it deviated from its set path. They constantly checked the tunnel’s advancing direction, for example by looking back at the light that penetrated through the tunnel mouth, and made corrections whenever necessary. Large deviations could happen, and they could result in one end of the tunnel not being usable. An inscription written on the side of a 428-meter tunnel, built by the Romans as part of the Saldae aqueduct system in modern-day Algeria, describes how the two teams of builders missed each other in the mountain and how the later construction of a lateral link between both corridors corrected the initial error.

The Romans dug tunnels for their roads using the counter-excavation method whenever they encountered obstacles such as hills or mountains that were too high for roads to pass over. An example is the 37-meter-long, 6-meter-high Furlo Pass Tunnel built in Italy in 69-79 CE. Remarkably, a modern road still uses this tunnel today. Tunnels were also built for mineral extraction. Miners would locate a mineral vein and then pursue it with shafts and tunnels underground. Traces of such tunnels used to mine gold can still be found at the Dolaucothi mines in Wales. When the sole purpose of a tunnel was mineral extraction, construction required less planning, as the tunnel route was determined by the mineral vein.

Roman tunnel projects were carefully planned and carried out. The length of time it took to construct a tunnel depended on the method being used and the type of rock being excavated. The qanat construction method was usually faster than the counter-excavation method, as it was more straightforward: the mountain could be excavated not only from the tunnel mouths but also from the shafts. The type of rock could also influence construction times. When the rock was hard, the Romans employed a technique called fire quenching, which consisted of heating the rock with fire and then suddenly cooling it with cold water so that it would crack. Progress through hard rock could be very slow, and it was not uncommon for tunnels to take years, if not decades, to be built. Construction marks left on a Roman tunnel in Bologna show that the rate of advance through solid rock was 30 centimeters per day. In contrast, the rate of advance of the Claudius tunnel can be calculated at 1.4 meters per day. Most tunnels had inscriptions showing the names of patrons who ordered construction and sometimes the name of the architect. For example, the 1.4-kilometer Cevlik tunnel in Turkey, built to divert the floodwater threatening the harbor of the ancient city of Seleuceia Pieria, had inscriptions on the entrance, still visible today, which also indicate that the tunnel was started in 69 CE and completed in 81 CE.
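The two advance rates quoted above follow from simple length-over-time arithmetic. A minimal sketch in Python, assuming 365-day years and using the round figures from the passage:

    # Rough check of the advance rates quoted above (assumes 365-day years).
    CLAUDIUS_LENGTH_M = 5600   # 5.6 km, per the passage
    CLAUDIUS_YEARS = 11
    rate = CLAUDIUS_LENGTH_M / (CLAUDIUS_YEARS * 365)
    print(f"Claudius tunnel: about {rate:.1f} m/day")  # ~1.4 m/day

    # At the Bologna tunnel's 0.30 m/day through solid rock, the same
    # distance would have taken roughly 50 years:
    print(f"At 0.30 m/day: about {CLAUDIUS_LENGTH_M / 0.30 / 365:.0f} years")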

Roman shipbuilding and navigation

Shipbuilding today is based on science, and ships are built using computers and sophisticated tools. Shipbuilding in ancient Rome, however, was more of an art, relying on estimation, inherited techniques and personal experience. The Romans were not traditionally sailors but mostly land-based people who learned to build ships from the people that they conquered, namely the Greeks and the Egyptians.

There are a few surviving written documents that give descriptions and representations of ancient Roman ships, including the sails and rigging. Excavated vessels also provide some clues about ancient shipbuilding techniques. Studies of these have taught us that ancient Roman shipbuilders built the outer hull first, then proceeded with the frame and the rest of the ship. Planks used to build the outer hull were initially sewn together. Starting from the 6th century BCE, they were fixed using a method called mortise and tenon, whereby one plank locked into another without the need for stitching. Then, in the first centuries of the current era, Mediterranean shipbuilders shifted to another shipbuilding method, still in use today, which consisted of building the frame first and then proceeding with the hull and the other components of the ship. This method was more systematic and dramatically shortened ship construction times. The ancient Romans built large merchant ships and warships whose size and technology were unequalled until the 16th century CE.

Warships were built to be lightweight and very speedy. They had to be able to sail near the coast, which is why they had no ballast or excess load and were built with a long, narrow hull. They did not sink when damaged and often would lie crippled on the sea’s surface following naval battles. They had a bronze battering ram, which was used to pierce the timber hulls or break the oars of enemy vessels. Warships used both wind (sails) and human power (oarsmen) and were therefore very fast. Eventually, Rome’s navy became the largest and most powerful in the Mediterranean, and the Romans had control over what they therefore called Mare Nostrum, meaning ‘our sea’.

There were many kinds of warship. The ‘trireme’ was the dominant warship from the 7th to 4th century BCE. It had rowers in the top, middle and lower levels, and approximately 50 rowers in each bank. The rowers at the bottom had the most uncomfortable position, as they were under the other rowers and were exposed to the water entering through the oar-holes. It is worth noting that, contrary to popular perception, rowers were not slaves but mostly Roman citizens enrolled in the military. The trireme was superseded by larger ships with even more rowers.

Merchant ships were built to transport lots of cargo over long distances and at a reasonable cost. They had a wider hull, double planking and a solid interior for added stability. Unlike warships, their V-shaped hull was deep underwater, meaning that they could not sail too close to the coast. They usually had two huge side rudders located off the stern and controlled by a small tiller bar connected to a system of cables. They had from one to three masts with large square sails and a small triangular sail at the bow. Just like warships, merchant ships used oarsmen, but coordinating the hundreds of rowers in both types of ship was not an easy task. To assist them, music would be played on an instrument, and oars would then keep time with this.

The cargo on merchant ships included raw materials (e.g. iron bars, copper, marble and granite) and agricultural products (e.g. grain from Egypt’s Nile valley). During the Empire, Rome was a huge city by ancient standards, with about one million inhabitants. Goods from all over the world would come to the city through the port of Pozzuoli, situated west of the bay of Naples in Italy, and through the gigantic port of Ostia, situated at the mouth of the Tiber River. Large merchant ships would approach the destination port and, just like today, be intercepted by a number of towboats that would drag them to the quay.

The time of travel along the many sailing routes could vary widely. Navigation in ancient Rome did not rely on sophisticated instruments such as compasses but on experience, local knowledge and observation of natural phenomena. In conditions of good visibility, seamen in the Mediterranean often had the mainland or islands in sight, which greatly facilitated navigation. They sailed by noting their position relative to a succession of recognisable landmarks. When weather conditions were not good or where land was no longer visible, Roman mariners estimated directions from the pole star or, with less accuracy, from the Sun at noon. They also estimated directions relative to the wind and swell. Overall, shipping in ancient Roman times resembled shipping today, with large vessels regularly crossing the seas and bringing supplies from across the Empire.





The availability of archeological evidence for study depends on the natural conditions in which the archeological remains are found: certain natural conditions favor the preservation of organic substances and therefore tend to shelter well-preserved organic remains, while others lead to the degradation or destruction of whatever organic remains may have existed.

An important distinction in land archeology can be made between dryland and wetland archeological sites. The vast majority of sites are dry sites, which means that the moisture content of the material enveloping the archeological evidence is low and preservation of organic material is, as a result, quite poor. Wetland archeological sites are those found in lakes, swamps, marshes, and bogs; in these sites, organic materials are effectively sealed in an airless, wet environment that tends to foster preservation. It has been estimated that on a wet archeological site, often 90 percent of the finds are organic. This is the case, however, only when the site has been more or less permanently waterlogged up to the time of excavation; if a wet site has dried out periodically, perhaps seasonally, decomposition of the organic material has most likely taken place. Organic material such as textiles, leather, basketry, wood, and plant remains of all kinds tends to be well preserved in permanently waterlogged sites, while little or none of this type of organic material would survive in dryland archeological sites or in wetland sites that have from time to time dried out. For this reason, archeologists have been focusing more on wet sites, which are proving to be rich sources of evidence about the lifestyles and activities of past human cultures.

A serious problem with archeological finds in waterlogged environments is that the organic finds, and wood in particular, deteriorate rapidly when they are removed from the wet environment and begin to dry out. It is therefore important that organic finds be kept wet until they can be treated in a laboratory. The need for extraordinary measures to preserve organic finds taken from wetland environments in part explains the huge cost of wetland archeology, which has been estimated at quadruple the cost of dryland archeology.

One wetland site that has produced extraordinary finds is the Ozette site, on the northwest coast of the United States in the state of Washington. Around 1750, a huge mudslide, the result of the seasonal swelling of an underground stream, completely covered sections of a whaling village located there. Memories of the village were kept alive by descendants of its surviving inhabitants in their traditional stories, and an archeological excavation of the site was organized. The mud was removed from the site, and a number of well-preserved cedarwood houses were uncovered, complete with carved panels painted with animal designs, hearths, and benches for sleeping. More than 50,000 artifacts in excellent condition were found, including woven material such as baskets and mats, equipment for weaving such as looms and spindles, hunting equipment such as bows and harpoons, fishing equipment such as hooks and rakes, equipment used for water transportation such as canoe paddles and bailers, containers such as wooden boxes and bowls, and decorative items such as a huge block of cedar carved in the shape of the dorsal fin of a whale and miniature carved figurines.



Today, bicycles are elegantly simple machines that are common around the world. Many people ride bicycles for recreation, whereas others use them as a means of transportation. The first bicycle, called a draisienne, was invented in Germany in 1818 by Baron Karl de Drais de Sauerbrun. Because it was made of wood, the draisienne wasn’t very durable, nor did it have pedals. Riders moved it by pushing their feet against the ground.

In 1839, Kirkpatrick Macmillan, a Scottish blacksmith, invented a much better bicycle. Macmillan’s machine had tires with iron rims to keep them from getting worn down. He also used foot-operated cranks, similar to pedals, so his bicycle could be ridden at a quick pace. It didn’t look much like the modern bicycle, though, because its back wheel was substantially larger than its front wheel. Although Macmillan’s bicycles could be ridden easily, they were never produced in large numbers.

In 1861, Frenchman Pierre Michaux and his brother Ernest invented a bicycle with an improved crank mechanism. They called their bicycle a vélocipède, but most people called it a “bone shaker” because of the jarring effect of the wood and iron frame. Despite the unflattering nickname, the vélocipède was a hit. After a few years, the Michaux family was making hundreds of the machines annually, mostly for fun-seeking young people.

Ten years later, James Starley, an English inventor, made several innovations that revolutionized bicycle design. He made the front wheel many times larger than the back wheel, put a gear on the pedals to make the bicycle more efficient, and lightened the wheels by using wire spokes. Although this bicycle was much lighter and less tiring to ride, it was still clumsy, extremely top-heavy, and ridden mostly for entertainment.

It wasn’t until 1874 that the first truly modern bicycle appeared on the scene. Invented by another Englishman, H. J. Lawson, the safety bicycle would look familiar to today’s cyclists. The safety bicycle had equal-sized wheels, which made it much less prone to toppling over. Lawson also attached a chain to the pedals to drive the rear wheel. By 1893, the safety bicycle had been further improved with air-filled rubber tires, a diamond-shaped frame, and easy braking. With the improvements provided by Lawson, bicycles became extremely popular and useful for transportation. Today, they are built, used, and enjoyed all over the world.



THE SENSORY SYSTEMS OF SHARKS

The well-developed sensory systems of sharks give them unmatched advantages over almost every other animal when hunting or feeding.

Almost one-third of a shark’s brain is devoted to its sense of smell, which is so powerful that a shark can detect perfumes and odors in the water hundreds of meters from their source. Sharks can detect as little as one part per million of substances in the water, such as blood, body fluids, and chemical substances produced by animals under stress. Some sharks can detect as little as ten drops of liquid tuna in the volume of water it takes to fill an average swimming pool.

Sharks’ eyes can detect even very small movements and can see in gloomy conditions, making them effective hunters in the near-dark depths. Like cats and other nocturnal hunters, sharks have a reflective layer in the back of their eyes, called the tapetum lucidum, which amplifies low levels of light. In clear water, sharks can see their prey from about 20 to 30 meters away.

Sharks’ eyes also contain specialized cells that detect color, and behavioral studies suggest that sharks can see colors as well as black, white, and shades of gray. These studies also revealed that luminous and glimmering objects and bright colors, such as yellow and orange, may attract sharks.

Sharks employ an additional sensory system, which scientists call the lateral line, to detect vibrations in the water such as those created by fish, boats, surfers, or even swimmers. The lateral line is a narrow strip of sensory cells running along the sides of the body and into the shark’s head. It is especially sensitive to low-frequency sounds, such as those emitted by struggling wounded fish or other animals.

Additionally, the activity of neurons and muscles in living animals creates weak electrical currents, which sharks can sense almost instantly. The shark’s electrosensors, clusters of cells known as the ampullae of Lorenzini, are located on the head of all sharks. This reception system is effective only over distances of less than 1 meter, so it likely aids sharks in the final stages of feeding or attack. Scientists also suspect that this system may somehow enable sharks to detect the Earth’s weak magnetic field, guiding them during migration.



The atmosphere of Mars is 95 percent carbon dioxide, nearly three percent nitrogen, and nearly two percent argon with tiny amounts of oxygen, carbon monoxide, water vapor, ozone, and other trace gases. Atmospheric pressure on Mars changes with season. In the fall and winter at the poles of Mars, the temperature gets so low that carbon dioxide snows out of the atmosphere and forms meters-thick deposits of dry ice on the surface.

In the springtime, as the surface warms up, the dry ice evaporates back into the atmosphere. The atmospheric pressure also varies with altitude, just as it does here on Earth, and is about ten times lower at the top of Olympus Mons than on the floor of Hellas Planitia.
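The roughly tenfold difference quoted above is consistent with a simple barometric (exponential) falloff of pressure with altitude. A minimal sketch in Python, assuming a Martian scale height of about 11 km and summit and basin-floor elevations of roughly +21 km and -7 km (figures not given in the passage):

    import math

    # Barometric relation: p(h) = p0 * exp(-h / H); the ratio between two
    # elevations depends only on their elevation difference.
    H_KM = 11.0                 # assumed Martian scale height
    DELTA_H_KM = 21.0 - (-7.0)  # assumed Olympus Mons summit vs. Hellas floor
    ratio = math.exp(DELTA_H_KM / H_KM)
    print(f"pressure ratio: about {ratio:.0f}x")  # ~13, on the order of ten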

Even though the Martian atmosphere contains very little water vapor, clouds and frosts form on Mars and have been studied in detail by telescopes and spacecraft. Wave clouds, spiral clouds, clouds formed near topographic obstacles such as volcanoes, wispy cirrus-like clouds, and a wide variety of hazes and fogs have all been observed. Along with the dust storms and related clouds described above, these features all reveal the Martian atmosphere to be quite dynamic.

Studies indicate that the atmosphere of Mars was much thicker long ago than it is now. A thicker atmosphere would have been able to trap more solar heat, possibly allowing the surface to warm up to the point where water could have remained liquid for long periods of time.

Scientists do not know, however, what the composition of this thicker atmosphere was, and where it went. They theorize that it may have been driven off in a catastrophic impact event, or that the gases reacted with water and got trapped in rocks and minerals on the surface. Scientists also wonder where the liquid water that formerly existed at the surface went.

Some astronomers believe that it seeped into the ground and is still there as ice in the subsurface today. Others think that it may have evaporated and slowly trickled off into space as sunlight broke apart the water vapor molecules over long periods of time. Determining the history of the Martian atmosphere and finding out whether sizable quantities of water still exist there are among the most important goals of Mars exploration today.



Although many companies offer tuition reimbursement, most companies reimburse employees only for classes that are relevant to their positions. This is a very limiting policy. A company that reimburses employees for all college credit courses, whether job-related or not, offers a service not only to the employees, but to the entire company.

One good reason for giving employees unconditional tuition reimbursement is that it shows the company’s dedication to its employees. In today’s economy, where job security is a thing of the past and employees feel more and more expendable, it is important for a company to demonstrate to its employees that it cares. The best way to do this is with concrete investments in them.                             

In turn, this dedication to the betterment of company employees will create greater employee loyalty. A company that puts out funds to pay for the education of its employees will get its money back by having employees stay with the company longer. It will reduce employee turnover, because even employees who don’t take advantage of the tuition reimbursement program will be more loyal to their company, just knowing that their company cares enough to pay for their education.                             

Most importantly, the company that has an unrestricted tuition reimbursement program will have higher quality employees. Although these companies do indeed run the risk of losing money on employees who move to a different company as soon as they get their degree, more often than not, the employee will stay. And even if employees do leave after graduation, it generally takes several years to complete any degree program. Thus, even if the employee leaves upon graduating, throughout those years the employer will have a more sophisticated, more intelligent, and therefore more valuable and productive employee. And, if the employee stays, that education will doubly benefit the company: not only is the employee more educated, but now that employee can be promoted so the company doesn’t have to fill a high-level vacancy from the outside. Open positions can be filled by people who already know the company well.

Though unconditional tuition reimbursement requires a significant investment on the employer’s part, it is perhaps one of the wisest investments a company can make.



For 150 years scientists have tried to determine the solar constant, the amount of solar energy that reaches the Earth. Yet, even in the most cloud-free regions of the planet, the solar constant cannot be measured precisely. Gas molecules and dust particles in the atmosphere absorb and scatter sunlight and prevent some wavelengths of the light from ever reaching the ground.

With the advent of satellites, however, scientists have finally been able to measure the Sun's output without being impeded by the Earth's atmosphere. Solar Max, a satellite from the National Aeronautics and Space Administration (NASA), has been measuring the Sun's output since February 1980. Although a malfunction in the satellite's control system limited its observations for a few years, the satellite was repaired in orbit by astronauts from the space shuttle in 1984. Solar Max's observations indicate that the solar constant is not really constant after all.

The satellite's instruments have detected frequent, small variations in the Sun's energy output, generally amounting to no more than 0.05 percent of the Sun's mean energy output and lasting from a few days to a few weeks. Scientists believe these fluctuations coincide with the appearance and disappearance of large groups of sunspots on the Sun's disk. Sunspots are relatively dark regions on the Sun's surface that have strong magnetic fields and a temperature about 2,000 degrees Fahrenheit cooler than the rest of the Sun's surface. Particularly large fluctuations in the solar constant have coincided with sightings of large sunspot groups. In 1980, for example, Solar Max's instruments registered a 0.3 percent drop in the solar energy reaching the Earth. At that time a sunspot group covered about 0.6 percent of the solar disk, an area 20 times larger than the Earth's surface.

Long-term variations in the solar constant are more difficult to determine. Although Solar Max's data have indicated a slow and steady decline in the Sun's output, some scientists have thought that the satellite's aging detectors might have become less sensitive over the years, thus falsely indicating a drop in the solar constant. This possibility was dismissed, however, by comparing Solar Max's observations with data from a similar instrument that has been operating on NASA's Nimbus 7 weather satellite since 1978.
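The area comparison above can be reproduced with rough arithmetic. A minimal sketch in Python, comparing 0.6 percent of the solar disk with the Earth's total surface area, assuming a solar radius of about 696,000 km and an Earth radius of about 6,371 km (values not given in the passage):

    import math

    R_SUN_KM = 6.96e5     # assumed solar radius
    R_EARTH_KM = 6.371e3  # assumed Earth radius

    spot_area = 0.006 * math.pi * R_SUN_KM**2    # 0.6% of the solar disk
    earth_surface = 4 * math.pi * R_EARTH_KM**2  # Earth's surface area
    ratio = spot_area / earth_surface
    print(f"sunspot group area: about {ratio:.0f}x Earth's surface")  # ~18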



The immune system is equal in complexity to the combined intricacies of the brain and nervous system. The success of the immune system in defending the body relies on a dynamic regulatory communications network consisting of millions and millions of cells. Organized into sets and subsets, these cells pass information back and forth like clouds of bees swarming around a hive. The result is a sensitive system of checks and balances that produces an immune response that is prompt, appropriate, effective, and self-limiting.

At the heart of the immune system is the ability to distinguish between self and nonself. When immune defenders encounter cells or organisms carrying foreign or nonself molecules, the immune troops move quickly to eliminate the intruders. Virtually every body cell carries distinctive molecules that identify it as self. The body’s immune defenses do not normally attack tissues that carry a self-marker. Rather, immune cells and other body cells coexist peaceably in a state known as self-tolerance.

When a normally functioning immune system attacks a nonself molecule, the system has the ability to “remember” the specifics of the foreign body. Upon subsequent encounters with the same species of molecules, the immune system reacts accordingly. With the possible exception of antibodies passed during lactation, this so-called immune system memory is not inherited. Even if a particular virus has occurred in your family, your immune system must still “learn” from experience with the many millions of distinctive nonself molecules in the sea of microbes in which we live. Learning entails producing the appropriate molecules and cells to match up with and counteract each nonself invader.

Any substance capable of triggering an immune response is called an antigen. Antigens are not to be confused with allergens, which are most often harmless substances (such as ragweed pollen or cat hair) that provoke the immune system to set off the inappropriate and harmful response known as allergy. An antigen can be a virus, a bacterium, a fungus, a parasite, or even a portion or product of one of these organisms. Tissues or cells from another individual (except an identical twin, whose cells carry identical self-markers) also act as antigens; because the immune system recognizes transplanted tissues as foreign, it rejects them. The body will even reject nourishing proteins unless they are first broken down by the digestive system into their primary, nonantigenic building blocks.

An antigen announces its foreignness by means of intricate and characteristic shapes called epitopes, which protrude from its surface. Most antigens, even the simplest microbes, carry several different kinds of epitopes on their surface; some may even carry several hundred. Some epitopes will be more effective than others at stimulating an immune response.

Only in abnormal situations does the immune system wrongly identify self as nonself and execute a misdirected immune attack. The result can be a so-called autoimmune disease such as rheumatoid arthritis or systemic lupus erythematosus. The painful side effects of these diseases are caused by a person’s immune system actually attacking itself.



Coffee is an internationally beloved beverage, of which 2.25 billion cups are consumed each day around the world. It not only provides billions of people with energy for their workdays, but it also supports the livelihoods of over 120 million people in 70 countries. However, all of this will soon be at risk. With the changes in climate predicted in the near future, many of the current coffee production areas in mountainous, tropical regions will become untenable for the crop. Areas that are currently suitable for growing coffee will likely be reduced by 50 percent by the year 2050, yet demand for the drink is expected to double by that same year.

The primary species of coffee cultivated for mass consumption are Coffea arabica, commonly known as Arabica coffee, and Coffea canephora, commonly called Robusta coffee. Of the two, Arabica has the classic rich and smooth coffee taste, while Robusta is not as palatable to the broader public, having a harsher taste and being used mainly in less expensive coffee blends. Arabica coffee, which grows in the narrow region known as the Coffee Belt (stretching from Central America through sub-Saharan Africa and southern Asia), is difficult to cultivate, as it requires a stable environment with specific temperatures and amounts of precipitation. A change in temperature of just a couple of degrees reduces its quality and taste profile. Because of this, rising temperatures and altered rainfall patterns are already causing a variety of problems for coffee farms growing Arabica, which is now officially endangered. Some production can be moved to higher elevations where temperatures are lower, but not all regions have this option. This means that over the next few decades, along with decreasing crop yields, the taste and quality of coffee are likely to decrease, just as the price will increase.

Not only are the cultivated varieties of coffee in danger; six in ten species of wild coffee are also at risk of extinction. Wild coffee is an important genetic resource for coffee production. The wild species help maintain the diversity, and thus the stability, of all coffee plants, since they can be used as a source of seed and for creating hybrid varieties that are hardier. The majority of the wild coffee species are found in sub-Saharan Africa and Madagascar, where deforestation, human encroachment, and disease are killing them off. It is estimated that wild coffee will be extinct by 2080.

While those predictions may seem far off, current warmer temperatures are already exacerbating the threat of diseases that attack coffee plants, such as coffee rust, and of detrimental pests, such as the coffee berry borer. Until recently the coffee berry borer was only found at altitudes up to 1,500 meters above sea level, but it is now being found at higher altitudes due to warmer temperatures and increased humidity. Already this pest alone causes annual losses of hundreds of millions of dollars in coffee beans. Thus, as the climate changes and diseases and pests increase, there will likely be a surge in pesticide use, another factor that will reduce the quality of the final product.

Scientists have been developing strategies to cope with the effects of climate change on coffee production for years, including the creation of a gene bank to preserve the genetic diversity of coffee and a catalog of coffee plants with information on their preferred climate and pest susceptibility. On the ground, coffee retailers and institutions have been working with farmers to develop and implement sustainable agricultural practices. Finally, a main goal is to breed hybrid disease- and climate-resilient strains of coffee. These changes may take decades to bear fruit, but they will no doubt be well received by coffee farmers and the coffee-drinking public alike.




Much as an electric lamp transforms electrical energy into heat and light, the visual "apparatus" of a human being acts as a transformer of light into sight. Light projected from a source or reflected by an object enters the cornea and lens of the eyeball. The energy is transmitted to the retina of the eye, whose rods and cones are activated. The stimuli are transferred by nerve cells to the optic nerve and then to the brain. Man is a binocular animal, and the impressions from his two eyes are translated into sight: a rapid, compound analysis of the shape, form, color, size, position, and motion of the things he sees.

Photometry is the science of measuring light. The illuminating engineer and designer employ photometric data constantly in their work. In all fields of application of light and lighting, they predicate their choice of equipment, lamps, wall finishes, colors of light and backgrounds, and other factors affecting the luminous and environmental pattern to be secured, in great part on data supplied originally by a photometric laboratory. Today, extensive tables and charts of photometric data are used widely, constituting the basis for many details of design. Although the lighting designer may not be called upon to do the detailed work of making measurements or plotting data in the form of photometric curves and analyzing them, an understanding of the terms used and their derivation forms valuable background knowledge.

The perception of color is a complex visual sensation, intimately related to light. The apparent color of an object depends primarily upon four factors: its ability to reflect various colors of light, the nature of the light by which it is seen, the color of its surroundings, and the characteristics and state of adaptation of the eye. In most discussions of color, a distinction is made between white and colored objects. White is the color name most usually applied to a material that diffusely reflects a high percentage of all the hues of light. Colors that have no hue are termed neutral or achromatic colors. They include white, off-white, all shades of gray, down to black.

All colored objects selectively absorb certain wavelengths of light and reflect or transmit others in varying degrees. Inorganic materials, chiefly metals such as copper and brass, reflect light from their surfaces. Hence we have the term "surface" or "metallic" colors, as contrasted with "body" or "pigment" colors. In the former, the light reflected from the surface is often tinted.

Most paints, on the other hand, have body or pigment colors. In these, light is reflected from the surface without much color change, but the body material absorbs some colors and reflects others; hence, the diffuse reflection from the body of the material is colored but often appears to be overlaid and diluted with a "white" reflection from the glossy surface of the paint film. In paints and enamels, the pigment particles, which are usually opaque, are suspended in a vehicle such as oil or plastic. The particles of a dye, on the other hand, are considerably finer and may be described as coloring matter in solution. The dye particles are more often transparent or translucent.


Most managers can identify the major trends of the day. But in the course of conducting research in a number of industries and working directly with companies, we have discovered that managers often fail to recognize the less obvious but profound ways these trends are influencing consumers’ aspirations, attitudes, and behaviors. This is especially true of trends that managers view as peripheral to their core markets. Many ignore trends in their innovation strategies or adopt a wait-and-see approach and let competitors take the lead. At a minimum, such responses mean missed profit opportunities. At the extreme, they can jeopardize a company by ceding to rivals the opportunity to transform the industry. The purpose of this article is twofold: to spur managers to think more expansively about how trends could engender new value propositions in their core markets, and to provide some high-level advice on how to make market research and product development personnel more adept at analyzing and exploiting trends.

One strategy, known as ‘infuse and augment’, is to design a product or service that retains most of the attributes and functions of existing products in the category but adds others that address the needs and desires unleashed by a major trend. A case in point is the Poppy range of handbags, which the firm Coach created in response to the economic downturn of 2008. The Coach brand had been a symbol of opulence and luxury for nearly 70 years, and the most obvious reaction to the downturn would have been to lower prices. However, that would have risked cheapening the brand’s image. Instead, Coach initiated a consumer-research project which revealed that customers were eager to lift themselves and the country out of tough times. Using these insights, Coach launched the lower-priced Poppy handbags, which were in vibrant colors and looked more youthful and playful than conventional Coach products. Creating the sub-brand allowed Coach to avert an across-the-board price cut. In contrast to the many companies that responded to the recession by cutting prices, Coach saw the new consumer mindset as an opportunity for innovation and renewal.

A further example of this strategy was supermarket Tesco’s response to consumers’ growing concerns about the environment. With that in mind, Tesco, one of the world’s top five retailers, introduced its Greener Living program, which demonstrates the company’s commitment to protecting the environment by involving consumers in ways that produce tangible results. For example, Tesco customers can accumulate points for such activities as reusing bags, recycling cans and printer cartridges, and buying home-insulation materials. Like points earned on regular purchases, these green points can be redeemed for cash. Tesco has not abandoned its traditional retail offerings but augmented its business with these innovations, thereby infusing its value proposition with a green streak.

A more radical strategy is ‘combine and transcend’. This entails combining aspects of the product’s existing value proposition with attributes addressing changes arising from a trend, to create a novel experience – one that may land the company in an entirely new market space. At first glance, spending resources to incorporate elements of a seemingly irrelevant trend into one’s core offerings sounds like it’s hardly worthwhile. But consider Nike’s move to integrate the digital revolution into its reputation for high-performance athletic footwear. In 2006, Nike teamed up with technology company Apple to launch Nike+, a digital sports kit comprising a sensor that attaches to the running shoe and a wireless receiver that connects to the user’s iPod. By combining Nike’s original value proposition for amateur athletes with one for digital consumers, the Nike+ sports kit and web interface moved the company from a focus on athletic apparel to a new plane of engagement with its customers.

A third approach, known as ‘counteract and reaffirm’, involves developing products or services that stress the values traditionally associated with the category in ways that allow consumers to oppose, or at least temporarily escape from, the aspects of trends they view as undesirable. A product that accomplished this is the ME2, a video game created by Canada’s iToys. By reaffirming the toy category’s association with physical play, the ME2 counteracted some of the widely perceived negative impacts of digital gaming devices. Like other handheld games, the device featured a host of exciting interactive games, a full-color LCD screen, and advanced 3D graphics. What set it apart was that it incorporated the traditional physical component of children’s play: it contained a pedometer, which tracked and awarded points for physical activity (walking, running, biking, skateboarding, climbing stairs). The child could use the points to enhance various virtual skills needed for the video game. The ME2, introduced in mid-2008, catered to kids’ huge desire to play video games while countering the negatives, such as associations with lack of exercise and obesity.

Once you have gained perspective on how trend-related changes in consumer opinions and behaviors impact on your category, you can determine which of our three innovation strategies to pursue. When your category’s basic value proposition continues to be meaningful for consumers influenced by the trend, the infuse-and-augment strategy will allow you to reinvigorate the category. If analysis reveals an increasing disparity between your category and consumers’ new focus, your innovations need to transcend the category to integrate the two worlds. Finally, if aspects of the category clash with undesired outcomes of a trend, such as associations with unhealthy lifestyles, there is an opportunity to counteract those changes by reaffirming the core values of your category.

Trends – technological, economic, environmental, social, or political – that affect how people perceive the world around them and shape what they expect from products and services present firms with unique opportunities for growth.