Monday, September 30, 2019

Business Lobbying Essay

The topic – It is always better to have clarity on the topic, as it allows a clear flow of ideas. Lobbying, in fact, is the attempt made by certain corporate groups to influence the direction of a country's or state's legislative policy in such a manner as to bring benefits to themselves and safeguard their interests. The objective is achieved by influencing legislators and members of Parliament, creating a lobby to bring forth favourable legislation and get it passed. A lobbyist may be an individual or a group of individuals working for their employer or as an agent. Such people can be leaders of labour unions, corporate representatives, legislators, bureaucrats and leading advocates exercising influence in legislative circles, or other private interest groups. India does not have any clear regulation for or against lobbying, at least when it goes by that name; but lobbying is not legal either. It has now become a well-established service industry, although it is known by different names such as public relations, external affairs management, environment management expertise, etc. Various established associations, federations and confederations of industry and commerce function as lobbyists to get policies framed in favour of corporates. Dilip Cherian, a known lobbyist and founder of Perfect Relations, states that lobbying functions as a bridge between companies and the government. He speaks in no ambiguous tone: “We help our clients understand the policy environment of the country. We help them identify key players and their positions in the policy area. The key players could be political parties, bureaucrats, the central government, panchayats, etc.” The lobbying industry has been pressing its demand for clear and transparent laws in countries like India, where no clarity on the issue is available. So it is high time that India decided whether to make lobbying legal or illegal by framing a detailed and clear policy. 
When you speak in favour of the topic, i.e. lobbying should be made legal in India, the key points may be: 1. Whenever there have been big leaps in policy framing in India favouring corporates in one way or another, the issue of lobbying has come up. Whether it was Enron and the Dabhol power project in Maharashtra, foreign investment in the corporate sector, big defence purchases, infrastructure development, or now foreign direct investment (FDI) in multi-brand retail, all have been shadowed by the issue of lobbying. A person or company lobbying for a certain favour can do so only once the government or the legislative bodies – Parliament or the state legislatures – have some path under consideration; lobbying would only smoothen the process. 2. Various chambers of commerce such as FICCI and CII, the National Association of Software and Services Companies, and private firms like Vaishnavi Corporate Communications, owned by Niira Radia, and DTA Associates, managed by Deepak Talwar, are among the top lobby groups. These organizations, however, maintain that they are not lobby groups but work to engage with the government on policy issues. When so much lobbying is done by registered and legal firms under one name or another, and this is a well-known fact, making lobbying legal would add to the government’s income through fees and charges levied on it. Where the amount now paid for lobbying goes is anybody’s guess; a transparent legislation would resolve this ambiguity and loss of income. 3. The US and some European countries have made lobbying legal with specific conditions, such as quarterly disclosures of the amount spent and the manner in which it has been spent. This provides vital information and transparency to lobbying practices. The furore raised in Parliament over Walmart’s lobbying in the USA could come up only because of such disclosures. 
Corporate giants such as WalMart, Pfizer, Dell, HP, Qualcomm, Alcatel-Lucent, Morgan Stanley and Prudential Financial have been eyeing the Indian market for a long time and have spent millions of dollars to have their business interests move at a faster pace in the growing Indian economy. With this potential growth, more and more companies will engage lobbyists who can directly interact with politicians and bureaucrats and push their agenda. Lobbying, whether legal or illegal, will remain integral to Indian business and politics; doing away with it or making it illegal is not an option. It would be better to make business lobbying legal, of course with specific clauses to ensure transparency. 4. Making lobbying legal will bring open debates and discussions in all forums, making it possible to understand which option is better. Lobbyists and company representatives will openly participate in such debates, presenting the pros and cons of performance and products. 5. At present, only Section 7 of the Prevention of Corruption Act may be invoked to call lobbying illegal, and this section is not very sound. Think of the money spent on lobbying in a single year: if lobbying were made legal, at least a part of it would find its way into government coffers. At present it forms part of the unaccounted money going into the pockets of politicians, bureaucrats and other influential people, the cost of which is eventually recovered from the common people of the country. 6. Apart from saving millions of dollars, the country may see rampant corruption in the name of lobbying fade away. 7. Since India is in the process of establishing a larger institutional framework, the government needs creative inputs from various experts. As long as lobbying does not lead to ‘policy or regulatory capture’, it should be allowed. 8. 
The Indian government itself has a lobby firm presenting its case to American lawmakers, while a number of Indian companies and entities also engage in lobbying activities in the US through their respective lobbyists. On various platforms – the UN, world economic summits, sports, the organizing of the Olympics and Commonwealth Games, etc. – countries lobby for their stakes. Lobbying, in fact, brings more competitiveness and improvement in quality, as things have to be explained and highlighted in comparison with every other stakeholder. India would gain a lot by making lobbying legal. When you speak against the topic, the key points may be: 1. The common man of India, already reeling under the pressure of corruption and unemployment, will be left penniless once lobbying is made legal. All the majors will lobby for their interests in the economy and ease their own entry, riding over the common man who hardly earns his bread and butter. Those who have more power and pelf will become greater lobbyists and will ensure that their interests are not compromised. 2. National interests will be cornered, as lobbyists will have the one-line motto of watching their own interests and will not be at all concerned about the country’s interests, since they will not be from this country. 3. Lobbyists will make corruption legal. Politicians and influential people will still garner their share from lobbyists at the cost of the nation. 4. Legislators, who are the law-makers, if influenced by lobbyists, may become inclined to serve them, oblivious of national interests. 5. Lobbying in defence production and purchases might put national security at stake. 6. India is a vast country with many complexities and problems. A lobbying company has no perception of this diversity or the nature of these problems; the government might simply gamble on the tactics of the lobbyist, and that might prove harmful in future. 7. 
There is no mechanism in India to bring accountability to lobbying or to publicly reveal the lobbying positions of companies and the money spent. Self-regulation in lieu of formal legislation is often proposed by industry players. In India, nobody knows the lobbying position of companies, let alone looks for consistency in lobbying positions and their impact on issues of sustainable development. Making it legal will add to the woes of Indian businesses. The efforts made so far in India – The Planning Commission has set up an expert group to look into the processes that comprise lobbying. Arun Maira, member of the Planning Commission, stated: “We will be considering various interests of all the stakeholders involved. This expert group comprises industries and government secretaries. There is an on-going dialogue with the industry associations for their views. We want lobbying to be transparent and representative. We are looking at the best benchmarks for processes of lobbying in other countries. However, this is a very large issue and the final solution is far down the road.” However, given the political exigencies of framing policies and the complex nature of the polity, this task will require the consummate skills of great statesmen.

Sunday, September 29, 2019

Jamestown and Massachusetts Bay Essay

Both the colonies of Massachusetts Bay and Jamestown were different in that Massachusetts Bay was settled mostly by Puritans, while Jamestown was settled by other English colonists. Both settlements struggled to survive at first, and both encountered Native Americans who were living on the land before they arrived. Although there were many differences between the two colonies, it comes as no surprise that they were very much alike, mostly in their hardships. In Virginia, disease, famine and continuing attacks by the neighboring Native Americans took a tremendous toll on the population: only sixty of the original two hundred and fourteen settlers at Jamestown survived. The settlers at Massachusetts Bay had their hardships too; the long, harsh winters, the unfertile soil and unfriendly relations with the local tribes surely made the population shrink. Both colonies struggled to find nourishment. They survived mostly on the crops they grew or on wild berries and vegetables found in the wilderness. In the winter the crops died and there was nothing for the settlers to eat; this famine was therefore the main cause of the population decrease. In the winter, the temperature would drop so low that if you didn't wear several coats of animal fur to keep you warm, you wouldn't stay alive. In Jamestown (Virginia) the settlers faced the danger of being under attack every day. The Native Americans did not take kindly to the settlers and saw their arrival as an invasion of their land. They were under the impression that the settlers were only staying a short time and would not take over the territory. The settlers had other plans, however: to claim the land for King James. In summary, the colonies of Massachusetts Bay and Jamestown were alike in their hardships, and their population downfalls were also quite similar. 
They were different in the people who lived in each colony and in the enemies that each colony made. Jamestown and Massachusetts Bay were great settlements that started our society as a whole; where would we be without them? Sources: http://www.nps.gov/jame/historyculture/jamestown-and-plymouth-compare-and-contrast.htm

Saturday, September 28, 2019

The Process of Product Analysis Essay Example | Topics and Well Written Essays - 1000 words

Watermelon is a nutritional fruit known to have originated in West Africa. Popular belief has it that watermelons are made up of nothing but water and sugar; however, studies have shown that watermelon is a nutrient-dense fruit, with high amounts of vitamins, antioxidants and minerals. It is generally a low-calorie fruit, which helps explain the high consumption rates in the US. Its nutritional values are beneficial against various conditions such as high blood pressure, cancers, asthma, dehydration and inflammation. Watermelons are readily available in the US, as they are easily grown and don’t require much input; many Americans grow them in their backyards. They generally thrive in hot and dry weather. A ripe watermelon is sweeter than a less mature one. The fruit is most popular during the summer and at picnics, owing to its sweetness and its help in combating the heat. There are different types of watermelons: seedless, yellow, orange, seeded and mini, which is also known as personal. Seeded watermelons are the most popular type in the US. They are fairly cheap, ranging between $2 and $4 per watermelon. Watermelons can keep for up to a week if well stored, but they are more perishable than other fruits such as oranges and passion fruits. Watermelons are popular in restaurants as they make a good dessert, and restaurants offer watermelon on their menus in various forms; it can be blended to make juice and smoothies.

Friday, September 27, 2019

Democracy is the Best Form of Government Essay Example | Topics and Well Written Essays - 500 words

In a democratic government, the people are allowed to engage in free markets and free enterprise. They can choose which type of industry or business to engage in to earn a living. Moreover, people are allowed to own personal and real property without limitation. Other important rights granted by any constitutional government are the freedoms of the press, of speech, of assembly and of association. People can publish or broadcast their advocacy and opinions in the media without fear of reprisal or incarceration by authorities and government agents. They can speak against the government and even criticize the appointed and elected leaders of the country. Citizens can also form organizations and associations for whatever purpose, except that of overthrowing the government or fomenting terror, fear or criminal acts. On the other hand, it can be argued that democracy is not the best kind of government for the people because it is lax in monitoring how people exercise their rights. People can engage in any business, including the buying and selling of guns and ammunition. It is not uncommon in a democratic country for ordinary citizens to own a gun and go on a shooting rampage that kills many innocent civilians; in the US, this type of incident happens even inside school campuses (Girl Critical, pars. 32-33; van Wagtendonk, par. 1). Citizens are also allowed to form organizations that present a harmless front but have a malicious motivation, such as sowing terror among the people.

Thursday, September 26, 2019

Pearl Harbor Essay Example | Topics and Well Written Essays - 500 words

The Americans never thought that Japan was capable of carrying out such an act; rumours of Japan’s planned surprise attack had been reported but were not taken seriously (Conn, Engelman, and Fairchild, 2000). The answer, I think, is the former: though the strategy was a good one, it went completely wrong. Why? The Japanese had developed the technology, attack strategy and skills to accomplish the seemingly impossible, having planned the raid for at least six months prior to the main attack. The Japanese also had good defensive plans against the US, such as fortifying individual islands with troops, reinforcing air squadrons and keeping a large fleet ready to retaliate in case the US attacked. So what went wrong? It’s simple: the Japanese became overconfident and changed their plans. Instead of implementing this defensive plan, they went further and attacked the US at Midway. It turned into a big disaster; the Japanese lost a huge number of their carriers, and with their naval and air forces now weakened, they could not resist the American forces, which eventually reached Japanese airspace itself. Another factor in the raid going wrong was that the commander of the Japanese fleet became nervous and aborted the third strike, after two had already taken place, which was aimed at the oil supplies and repair facilities of the US fleet. If this strike had taken place, the US would have had a difficult time retaliating. The Japanese attack plan anticipated the loss of one-third of the attacking planes and two carriers. Other problems were refuelling the planes over the Pacific and fixing wooden fins to naval torpedoes so as to stabilize them in shallow waters (Conn, Engelman, and Fairchild, 2000). Many messages were dispatched about the movements of the Japanese fleet, but the Americans ignored them, as they thought a formal war declaration would

Wednesday, September 25, 2019

Gentiva Health Services assignment Essay Example | Topics and Well Written Essays - 750 words

The adoption of differential costing would greatly help the company. Differential costing evaluates the difference in total costs and expected revenue between alternatives; this information is necessary in choosing where to invest or not to invest. In addition, it helps evaluate the incremental benefits resulting from an acquisition or a disposition. In a situation where the company needs to close down a unit, it would be able to weigh the detrimental costs likely to result from the closure. Gentiva is on the verge of making several decisions in order to remain relevant and stable amid current economic demands. Since differential costing involves assessing the costs and revenues arising from taking a given alternative, the company needs to employ the method to reach the best decisions. In turn, the company will be able to cope with the proposed law to cut health services and hospice care. The effects of the health care reform on providers of health services are clearly evident. The reform requires that the costs of health services be reduced by at least 3.5% every year. This implies reduced returns for services offered by Medicare companies. Due to the health services reform, Gentiva is moving to diversify its operations. The company aims to capitalize on the provision of Medicare services to the robust ageing American population, among whom a higher incidence of disease can be expected. Other likely effects include employment lay-offs.
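The differential-costing comparison described above can be sketched in a few lines. This is a hypothetical illustration only: the alternatives, revenue and cost figures below are invented for the example and are not Gentiva's actual numbers.

```python
# Minimal sketch of differential (incremental) cost analysis.
# All figures are illustrative, not taken from the company discussed above.

def differential_analysis(alternative_a, alternative_b):
    """Compare two alternatives by the difference in their net benefits.

    Each alternative is a dict with 'revenue' (a number) and 'costs'
    (a list of cost items). Returns the differential net benefit of
    choosing A over B; positive means A is the better choice.
    """
    net_a = alternative_a["revenue"] - sum(alternative_a["costs"])
    net_b = alternative_b["revenue"] - sum(alternative_b["costs"])
    return net_a - net_b

# Illustrative decision: keep a branch open vs. close it down.
keep_open = {"revenue": 500_000, "costs": [320_000, 90_000]}  # operating, admin
close_down = {"revenue": 0, "costs": [40_000]}                # closure costs

diff = differential_analysis(keep_open, close_down)
print(diff)  # 130000 -> keeping the branch open is better by this amount
```

The point of the method is that only the costs and revenues that *differ* between the alternatives enter the decision, which is exactly the comparison the function computes.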

Tuesday, September 24, 2019

Managing Ethics and Social Responsibility Research Paper

In relation to the conception of business ethics and social responsibility, Business Ethics Management (BEM) is viewed as a process of analysing and minimising ethical issues through the application of specific programmes and effective practices. Various important elements are applied by organisations to minimise ethical problems, including formulating effective mission statements and establishing a standard code of ethics. People involved with an organisation are provided with education and training in business ethics so that they can carry out their business activities in accordance with organisational objectives. Moreover, an organisation's business operations need to be audited and reported appropriately so that business performance can be analysed to a significant degree (University of Bahrain, n.d.). News International, or NI Group Ltd (NI), is regarded as one of the most renowned and biggest publishers of British newspapers. The well-known newspapers published by NI include The Sunday Times and The Times, which are considered among the best in terms of quality. Furthermore, another well-known newspaper of the organisation, The Sun, is regarded as one of the most read newspapers in the UK, accounting for seven million readers per day (NI Group Limited, 2012). This discussion will emphasise analysing the ethical and social issues faced by NI. It will further focus upon the techniques and standards adopted by the organisation to minimise the identified issues pertaining to the organisation. 
Ethical and Social Issues of NI – NI is one of the most famous newspaper publishers in the UK and has faced several issues that have hampered both the performance and the business ethics of the organisation at large. The organisation faced these problems due to corrupt practices performed while investigating certain crime-related activities or news reports. The major ethical problem faced by the organisation is the phone-hacking investigation. Staff members of the organisation were alleged to have accessed the messages of members of the general public as well as celebrities and politicians (Davies, 2009). Furthermore, one staff member, Clive Goodman, along with two other members of NI, was identified as having been involved in tapping the phone calls of Prince William, a member of the royal family (Day, 2006). Another event that hampered the ethical standards of the organisation was the case of Milly Dowler, who was murdered. In this case, members of NI were alleged to have erased messages from Milly Dowler’s mobile phone with the intention of acquiring future messages (Muller, 2012). Furthermore, the organisation is also charged with bribing public officials to acquire important information

Monday, September 23, 2019

Resistor Lab Report Example | Topics and Well Written Essays - 1000 words

Current is the same through every element of a series circuit, while voltage is the same across every branch of a parallel circuit (ANWAR, HALL, PRASAD and ROFFEY, 1998). Voltage is defined as the measure of the potential difference between two terminals in an electric circuit or electrical apparatus. Current is defined as the flow of electric charge in a circuit or apparatus. Resistance is the measure of the tendency of an electrical apparatus to hinder electric charge from flowing through a given circuit (NAHVI and EDMINISTER, 2004). A series circuit is one in which components are connected end to end in a single loop, the positive terminal of each component joined to the negative terminal of the next. Any gap induced in a series circuit, say by the breakdown of a given apparatus in the circuit, stops electric charge from flowing in the entire circuit. A parallel circuit is one in which, at some terminals of the circuit, positive terminals are connected to other positive terminals and negative terminals to other negative terminals. In this arrangement, a gap introduced at a given point does not stop the flow of charge in the rest of the circuit. In a series circuit, the current at any point is the same for the whole circuit; this is unlike a parallel circuit, where the current in one branch is not necessarily the same as the current in the other branches (SCIENCE AND TECHNOLOGY FOR CHILDREN, NATIONAL SCIENCE RESOURCES CENTER, NATIONAL ACADEMIES and SMITHSONIAN INSTITUTION, 2004). In a parallel connection, every branch is subject to the same voltage, that of the source across it. By contrast, in a series connection the circuit voltage is determined by the number of individual voltage sources connected: the more sources are connected, the higher the circuit voltage gets. Kirchhoff’s 1st law implies that the sum of all the current that is entering a given point or
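The series/parallel relationships described above can be checked numerically. This is a small illustrative script (the supply voltage and resistor values are invented, not taken from the lab report): resistances add in series, conductances add in parallel, and the branch currents of a parallel network sum to the total current, as Kirchhoff's first law requires.

```python
# Illustrative check of series vs. parallel resistor behaviour.
# Component values are arbitrary examples.

def series_resistance(resistors):
    # In series, the same current flows through every resistor,
    # so the equivalent resistance is the sum.
    return sum(resistors)

def parallel_resistance(resistors):
    # In parallel, every branch sees the same voltage,
    # so conductances (1/R) add.
    return 1.0 / sum(1.0 / r for r in resistors)

v_supply = 12.0              # volts
rs = [100.0, 200.0, 300.0]   # ohms

# Series: one current, voltages divide in proportion to each R.
i_series = v_supply / series_resistance(rs)

# Parallel: one voltage, branch currents sum (Kirchhoff's 1st law).
branch_currents = [v_supply / r for r in rs]
i_total = sum(branch_currents)
assert abs(i_total - v_supply / parallel_resistance(rs)) < 1e-9
```

Note that the parallel equivalent (about 54.5 Ω here) is smaller than the smallest branch resistance, while the series equivalent (600 Ω) is larger than any single resistor, which matches the qualitative description above.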

Sunday, September 22, 2019

Psychometric Testing Business Proposal Essay Example | Topics and Well Written Essays - 2500 words

It will rationalise the arguments for seeking an external, experienced provider to advise on and action that implementation. The objectives are to create awareness amongst branch, regional and national management of the necessity to improve retention levels of existing telephone sales operatives within the organisation, and to introduce new methods at the recruitment stage in order to achieve this. Typically, with a drop-out rate of approximately 3 new staff a month, HFC is losing on average £1,944 each year just from newly recruited CAM staff leaving the company after only one month of employment. If applied and monitored successfully, the implementation of psychometric testing could be used more widely across other departments within the organisation to ensure greater levels of high performance amongst staff, in addition to overall long-term improvements in staff satisfaction. A number of companies have been identified and researched with a view to obtaining the best approach to this methodology and an ability to advise accordingly on the principal objectives of this proposal. It is estimated that an assessment of all the identified specialists in psychometric testing will be presented and the successful tender agreed upon by July 2008, with a new strategic approach to the recruitment process in force by August 2008. The current questioning system employed by HFC relies on 24 questions with a numeric scoring system. This does not allow any thorough quantitative or qualitative data to be recorded or investigated for results accurate and specific to the candidate. It is clear that the present system needs updating to inform a more accurate interpretation of the candidate's commitment, knowledge and realistic expectations prior to being considered for employment. The scope of this proposal is to make a case for
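The attrition figures quoted above can be cross-checked with simple arithmetic. This back-of-envelope sketch takes the proposal's numbers at face value; the per-leaver cost is derived from them, not stated anywhere in the proposal, so it is an inference rather than a quoted figure.

```python
# Back-of-envelope check of the attrition figures quoted above.
# The implied cost per leaver is derived, not stated in the proposal.

monthly_dropouts = 3          # new staff leaving per month (quoted)
annual_loss_gbp = 1_944       # annual loss in GBP (quoted)

leavers_per_year = monthly_dropouts * 12           # 36 leavers a year
implied_cost_per_leaver = annual_loss_gbp / leavers_per_year

print(leavers_per_year)          # 36
print(implied_cost_per_leaver)   # 54.0 GBP per lost recruit
```

If the implied £54 per lost recruit seems low for recruitment and one month's training, that may indicate the quoted annual figure covers only a narrow slice of the true turnover cost, which would strengthen the case the proposal makes.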

Saturday, September 21, 2019

History of Global Warming Essay Example for Free

The succession of exceptional years with record high temperatures that characterized the 1980s helped to generate widespread popular interest in global warming and its many ramifications. The decade included six of the warmest years in the past century, and the trend continued into the 1990s, with 1991 the second warmest year on record. All of this fuelled speculation, especially among the media, that the earth’s temperature had begun an inexorable rise, and the idea was further reinforced by the results of scientific studies which indicated that global mean temperatures had risen by about 0.5°C since the beginning of the century. Periods of rising temperature are not unknown in the earth’s past. The most significant of these was the so-called Climatic Optimum, which occurred some 5,000-7,000 years ago and was associated with a level of warming that has not been matched since. If the current global warming continues, however, the record temperatures of the earlier period will easily be surpassed. Temperatures reached during a later warm spell in the early Middle Ages may well have been equalled already. More recently, the 1930s provided some of the highest temperatures since records began, although that decade has been relegated to second place by events in the 1980s. Such warm spells were accepted as part of the natural variability of the earth/atmosphere system in the past, but the current warming is viewed in a different light: it appears to be the first global warming to be created by human activity. The basic cause is seen as the enhancement of the greenhouse effect, brought on by rising levels of anthropogenically produced greenhouse gases. It is now generally accepted that the concentrations of greenhouse gases in the atmosphere have been increasing since the latter part of the nineteenth century. 
The increased use of fossil fuels has released large amounts of CO2, and the destruction of natural vegetation has prevented the environment from restoring the balance. Levels of other greenhouse gases, including CH4, N2O and CFCs, have also been rising. Since all of these gases have the ability to retain terrestrial radiation in the atmosphere, the net result should be a gradual increase in global temperatures. The link between recent warming and the enhancement of the greenhouse effect seems obvious. Most of the media, and many of those involved in the investigation and analysis of global climate change, seem to have accepted the relationship as a fait accompli. There are only a few dissenting voices expressing misgivings about the nature of the evidence and the rapidity with which it has been embraced. A survey of environmental scientists involved in the study of the earth’s changing climate, conducted in the spring of 1989, revealed that many still had doubts about the extent of the warming: more than 60 per cent of those questioned indicated that they were not completely confident that the current warming was beyond the range of normal natural variation in global temperatures (Slade 1990). The greenhouse effect is brought about by the ability of the atmosphere to be selective in its response to different types of radiation. The atmosphere readily transmits solar radiation, which is mainly short-wave energy from the visible and ultraviolet end of the energy spectrum, allowing it to pass through unaltered to heat the earth’s surface. The energy absorbed by the earth is re-radiated into the atmosphere, but this terrestrial radiation is long-wave infrared, and instead of being transmitted it is absorbed, causing the temperature of the atmosphere to rise. Some of the energy absorbed in the atmosphere is returned to the earth’s surface, causing its temperature to rise also. 
This is considered similar to the way in which a greenhouse works, allowing sunlight in but trapping the resulting heat inside, hence the use of the name ‘greenhouse effect’. In reality it is the glass in the greenhouse that allows the temperature to be maintained, by preventing the mixing of the warm air inside with the cold air outside. There is no such barrier to mixing in the real atmosphere, and some scientists have suggested that the processes are sufficiently different to preclude the use of the term ‘greenhouse effect’; Anthes et al. (1980), for example, prefer to use ‘atmospheric effect’. However, the use of the term ‘greenhouse effect’ to describe the ability of the atmosphere to absorb infrared energy is so well established that any change would cause needless confusion. The demand for change is not strong, and ‘greenhouse effect’ will continue to be used widely for descriptive purposes, although the analogy is not perfect. Without the greenhouse effect, global temperatures would be much lower than they are, perhaps averaging only −17°C compared with the existing average of +15°C. This, then, is a very important characteristic of the atmosphere, yet it is made possible by a group of gases which together make up less than 1 per cent of the total volume of the atmosphere. There are about twenty of these greenhouse gases. Carbon dioxide is the most abundant, but methane, nitrous oxide, the chlorofluorocarbons and tropospheric ozone are potentially significant, although the impact of ozone is limited by its variability and short lifespan. Water vapour also exhibits greenhouse properties, but it has received less attention in the greenhouse debate than the other gases, since the very efficient natural recycling of water through the hydrologic cycle ensures that its atmospheric concentration is little affected by human activities. 
Any change in the volume of the greenhouse gases will disrupt the energy flow in the earth/atmosphere system, and this will be reflected in changing world temperatures. This is nothing new: although the media sometimes seem to suggest that the greenhouse effect is a modern phenomenon, it has been a characteristic of the atmosphere for millions of years, sometimes more intense than it is now, sometimes less. Three of the principal greenhouse gases—CO2, methane (CH4) and the CFCs—contain carbon, one of the most common elements in the environment and one which plays a major role in the greenhouse effect. It is present in all organic substances and is a constituent of a great variety of compounds, ranging from relatively simple gases to very complex derivatives of petroleum hydrocarbons. The carbon in the environment is mobile, readily changing its affiliation with other elements in response to biological, chemical and physical processes. This mobility is controlled through a natural biogeochemical cycle which works to maintain a balance between the release of carbon compounds from their sources and their absorption in sinks. The natural carbon cycle is normally considered to be self-regulating, but on a time scale of the order of thousands of years. Over shorter periods the cycle appears to be unbalanced, but that may reflect an incomplete understanding of the processes involved, or perhaps indicate the presence of sinks or reservoirs still to be discovered (Moore and Bolin 1986). The carbon in the system moves between several major reservoirs. The atmosphere, for example, contains more than 750 billion tonnes of carbon at any given time, while 2,000 billion tonnes are stored on land and close to 40,000 billion tonnes are contained in the oceans (Gribbin 1978). Living terrestrial organic matter is estimated to contain between 450 and 600 billion tonnes, somewhat less than the amount stored in the atmosphere (Moore and Bolin 1986). 
World fossil fuel reserves also constitute an important carbon reservoir of some 5,000 billion tonnes (McCarthy et al. 1986). They contain carbon which has not been active in the cycle for millions of years, but is now being reintroduced as the growing demand for energy in modern society is met by the mining and burning of fossil fuels. It is being reactivated in the form of CO2, which is being released into the atmospheric reservoir in quantities sufficient to disrupt the natural flow of carbon in the environment. The greatest natural flow (or flux) is between the atmosphere and terrestrial biota and between the atmosphere and the oceans. Although these fluxes vary from time to time, they have no long-term impact on the greenhouse effect because they are an integral part of the earth/atmosphere system. In contrast, inputs to the atmosphere from fossil fuel consumption, although smaller than the natural flows, involve carbon which has not participated in the system for millions of years. When it is reintroduced, the system cannot cope immediately, and becomes unbalanced. The natural sinks are unable to absorb the new CO2 as rapidly as it is being produced. The excess remains in the atmosphere, to intensify the greenhouse effect, and thus contribute to global warming. The burning of fossil fuels adds more than 5 billion tonnes of CO2 to the atmosphere every year, with more than 90 per cent originating in North and Central America, Asia, Europe and the republics of the former USSR. Fossil fuel use remains the primary source of anthropogenic CO2, but augmenting that is the destruction of natural vegetation, which causes the level of atmospheric CO2 to increase by reducing the amount recycled during photosynthesis. Photosynthesis is a process, shared by all green plants, by which solar energy is converted into chemical energy. It involves gaseous exchange. During the process, CO2 taken in through the plant leaves is broken down into carbon and oxygen. 
The carbon is retained by the plant while the oxygen is released into the atmosphere. The role of vegetation in controlling CO2 through photosynthesis is clearly indicated by variations in the levels of the gas during the growing season. Measurements at Mauna Loa Observatory in Hawaii show patterns in which CO2 concentrations are lower during the northern summer and higher during the northern winter. These variations reflect the effects of photosynthesis in the northern hemisphere, which contains the bulk of the world’s vegetation (Bolin 1986). Plants absorb CO2 during their summer growing phase, but not during their winter dormant period, and the difference is sufficient to cause annual fluctuations in global CO2 levels. The clearing of vegetation raises CO2 levels indirectly through reduced photosynthesis, but CO2 is also added directly to the atmosphere by burning, by the decay of biomass and by the increased oxidation of carbon from the newly exposed soil. Such processes are estimated to be responsible for 5-20 per cent of current anthropogenic CO2 emissions (Waterstone 1993). This is usually considered a modern phenomenon, particularly prevalent in the tropical rainforests of South America and South-East Asia (Gribbin 1978), but Wilson (1978) has suggested that the pioneer agricultural settlement of North America, Australasia and South Africa in the second half of the nineteenth century made an important contribution to rising CO2 levels. This is supported to some extent by the observation that between 1850 and 1950 some 120 billion tonnes of carbon were released into the atmosphere as a result of deforestation and the destruction of other vegetation by fire (Stuiver 1978). The burning of fossil fuels produced only half that much CO2 over the same time period. 
Current estimates indicate that the atmospheric CO2 increase resulting from reduced photosynthesis and the clearing of vegetation is equivalent to about 1 billion tonnes per year (Moore and Bolin 1986), down slightly from the earlier value. However, the annual contribution from the burning of fossil fuels is almost ten times what it was in the years between 1850 and 1950. Although the total annual input of CO2 to the atmosphere is of the order of 6 billion tonnes, the atmospheric CO2 level increases by only about 2.5 billion tonnes per year. The difference is distributed to the oceans, to terrestrial biota and to other sinks as yet unknown (Moore and Bolin 1986). Although the oceans are commonly considered to absorb 2.5 billion tonnes of CO2 per year, recent studies suggest that the actual total may be only half that amount (Taylor 1992). The destination of the remainder has important implications for the study of the greenhouse effect, and continues to be investigated. The oceans absorb the CO2 in a variety of ways—some as a result of photosynthesis in phytoplankton, some through nutritional processes which allow marine organisms to grow calcium carbonate shells or skeletons, and some by direct diffusion at the air/ocean interface (McCarthy et al. 1986). The mixing of the ocean waters causes the redistribution of the absorbed CO2. In polar latitudes, for example, the added carbon sinks along with the cold surface waters in that region, whereas in warmer latitudes carbon-rich waters well up towards the surface allowing the CO2 to escape again. The turnover of the deep ocean waters is relatively slow, however, and carbon carried there in the sinking water or in the skeletons of dead marine organisms remains in storage for hundreds of years. More rapid mixing takes place through surface ocean currents such as the Gulf Stream, but in general the sea responds only slowly to changes in atmospheric CO2 levels. 
This may explain the apparent inability of the oceans to absorb more than 40-50 per cent of the CO2 added to the atmosphere by human activities, although they have the capacity to absorb all of the additional carbon (Moore and Bolin 1986). The oceans constitute the largest active reservoir of carbon in the earth/atmosphere system, and their ability to absorb CO2 is not in doubt. However, the specific mechanisms involved are now recognized as extremely complex, requiring more research into the interactions between the atmosphere, ocean and biosphere if they are to be better understood (Crane and Liss 1985). Palaeoenvironmental evidence suggests that the greenhouse effect fluctuated quite considerably in the past. In the Quaternary period, for example, it was less intense during glacial periods than during the interglacials (Bach 1976; Pisias and Imbrie 1986). Present concern is with its increasing intensity and the associated global warming. The rising concentration of atmospheric CO2 is usually identified as the main culprit, although it is not the most powerful of the greenhouse gases. It is the most abundant, however, and its concentration is increasing rapidly. As a result, it is considered likely to give a good indication of the trend of the climatic impact of the greenhouse effect, if not its exact magnitude. Svante Arrhenius, a Swedish chemist, is usually credited with being the first to recognize that an increase in CO2 would lead to global warming (Bolin 1986; Bach 1976; Crane and Liss 1985). Other scientists, including John Tyndall in Britain and T. C. Chamberlin in America (Jones and Henderson-Sellers 1990), also investigated the link, but Arrhenius provided the first quantitative predictions of the rise in temperature (Idso 1981; Crane and Liss 1985). He published his findings at the beginning of the twentieth century, at a time when the environmental implications of the Industrial Revolution were just beginning to be appreciated. 
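Arrhenius's quantitative prediction rested on the observation that warming responds roughly logarithmically to CO2 concentration. A modern statement of that relationship (a sketch using the commonly cited simplified-forcing coefficient, not Arrhenius's original numbers) is:

```latex
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},
\qquad
\Delta T \approx \lambda\,\Delta F ,
```

where C0 is a reference CO2 concentration and λ is a climate sensitivity parameter. On this approximation a doubling of CO2 gives a radiative forcing of about 3.7 W m⁻²; the logarithmic form explains why each successive increment of CO2 produces a somewhat smaller additional forcing.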
Little attention was paid to the potential impact of increased levels of CO2 on the earth’s radiation climate for some time after that, however, and the estimates of CO2-induced temperature increases calculated by Arrhenius in 1903 were not bettered until the early 1960s (Bolin 1986). Occasional papers on the topic appeared, but interest only began to increase significantly in the early 1970s, as part of a growing appreciation of the potentially dire consequences of human interference in the environment. Increased CO2 production and rising atmospheric turbidity were recognized as two important elements capable of causing changes in climate. The former had the potential to cause greater warming, whereas the latter was considered more likely to cause cooling (Schneider 1987). For a time it seemed that the cooling would dominate (Ponte 1976), but results from a growing number of investigations into greenhouse warming, published in the early 1980s, changed that (e.g. Idso 1981; Schneider 1987; Mitchell 1983). They revealed that scientists had generally underestimated the speed with which the greenhouse effect was intensifying, and had failed to appreciate the impact of the subsequent global warming on the environment or on human activities.

Friday, September 20, 2019

Tiny Encryption Algorithm Tea Computer Science Essay

Today, security is a concern for everyone. Many ways of implementing encryption algorithms have been investigated in order to achieve better performance in terms of security level, speed, power consumption and cost. This project discusses implementing the Tiny Encryption Algorithm (TEA) on a Field Programmable Gate Array (FPGA). FPGAs are reconfigurable chips: integrated circuits designed around a reconfigurable architecture. An FPGA chip is programmed using a Hardware Description Language (HDL). TEA is an encryption algorithm, or block cipher, that is considered fast and simple and is used in many applications. In this project, TEA will be implemented on an Altera Cyclone II FPGA using the Altera DE1 board. A PS/2 keyboard or the switches on the DE1 will be used as input. The output of the encryption and decryption will be shown on a VGA monitor, and the encrypted data will be stored in memory.

Specific Objectives

In order to complete this project, a few objectives have to be achieved:
- Program the Tiny Encryption Algorithm (TEA) using Verilog HDL (Hardware Description Language)
- Verify the functionality of the implementation of the encryption in the FPGA
- Perform simulation for timing analysis and for the encryption process of the TEA implementation in the FPGA
- Experiment with and test the project in practice

Literature Research

Cryptography

Before the modern era, secure communication was primarily a concern of government and the military [2]. Secure communication has become more important today as a result of the increasing use of electronic communication for many daily activities such as internet banking and online shopping. Cryptography is a practical way of conveying information securely [1]. The main aim of cryptography is to allow an authorized person to receive the message correctly while preventing eavesdroppers from understanding the content of the message [1]. 
The original message is called plaintext [1]. Plaintext is encrypted using a certain algorithm in the secure system in order to hide its meaning [1]. The output of this reversible mathematical process is called ciphertext, and the algorithm used in the process is called a cipher [1]. Ciphertext can be transmitted securely because, ideally, eavesdroppers with access to the ciphertext cannot understand the meaning behind it [1]. The reverse of this mathematical process decrypts the ciphertext back to plaintext, and this can only be done by the intended recipient [1]. The processes of encryption and decryption are shown in Figure 1.

Figure 1: Encryption and decryption (plaintext → encryption → ciphertext → decryption → plaintext, with an eavesdropper on the channel)

There are two types of encryption, or cipher, depending on the key used: asymmetric key and symmetric key.

Symmetric key

The encryption and decryption processes use the same key [1]. The major drawback of this approach is that both sender and receiver must know the key prior to transmission [1]; if the key itself is transmitted, the security of the system is compromised [1]. The advantage of a symmetric key is that the encryption and decryption processes are faster than with an asymmetric key; in other words, more data can be encrypted or decrypted in a given period of time [1].

Asymmetric key

The encryption and decryption processes use different keys, but the two keys are mathematically related [1]. It is very hard to obtain one key from the other, even though they are mathematically related [1]. The public key is used for the encryption process and the private key for the decryption process [1]. The security of the system is not compromised by making the public key available, but the corresponding private key must not be revealed to anyone [1].

Symmetric ciphers are further divided into two types: stream ciphers and block ciphers.

Stream Cipher

A stream cipher generates a keystream (a sequence of bits used as a key) [4]. 
The encryption process is usually done by combining the keystream with the plaintext using a bitwise XOR operation [4]. A keystream generated independently of the plaintext and ciphertext gives a synchronous stream cipher, while a keystream that depends on the plaintext gives a self-synchronizing stream cipher [4].

Block Cipher

A block cipher encrypts a fixed-length block of plaintext into a block of ciphertext of the same length [3]. The fixed length is called the block size. A block cipher uses the same secret key for the encryption and decryption processes [3]. Usually, the block size is 64 bits [3]; increasing the block size to 128 bits makes the processing more sophisticated [3].

Stream Cipher vs Block Cipher

A stream cipher is a type of symmetric encryption algorithm that can be designed to be exceptionally fast, much faster than a block cipher [4]. Stream ciphers normally process a few bits at a time, while block ciphers process large blocks of data [4]. A plaintext block encrypted using a block cipher always produces the same ciphertext when the same key is used [4]. With a stream cipher, the transformation of these smaller plaintext units varies depending on when they are encountered during the encryption process [4].

                               Stream cipher   Block cipher
  Block size                   Varies          Fixed
  Encryption/decryption speed  Fast            Slower
  Size of data block processed Small           Larger

Figure 2: Comparison of stream cipher and block cipher

Figure 3 below shows different types of encryption algorithms.

Figure 3: Different types of encryption algorithm

The Tiny Encryption Algorithm is implemented in this project because it is a block cipher that encrypts 64 bits of plaintext into 64 bits of ciphertext using a 128-bit key.

TEA

The Tiny Encryption Algorithm (TEA) is a Feistel-type routine designed by David J. Wheeler and Roger M. Needham. It uses addition and subtraction as the reversible operators [5]. 
XOR and ADD, used alternately in the routine, provide nonlinearity [5]. The dual bit-shifting in the routine causes all the bits and data to be mixed repeatedly [5]. The three operations XOR, ADD and SHIFT provide Shannon's properties of diffusion and confusion, necessary for a secure block cipher, without the need for P-boxes and S-boxes [6]. TEA is a Feistel cipher, which splits the plaintext into halves [7]. A subkey is applied to one half of the plaintext in the round function F [8]. The output of F is then XORed with the other half before the two halves are swapped [8]. The same pattern applies to every round except the last, where there is often no swap [8]. Figure 4 below shows a Feistel cipher diagram where the 64 bits of plaintext are divided into two equal 32-bit halves. A 128-bit key is used for the encryption and decryption processes, and it is split into 32-bit subkeys [7].

Figure 4: Two Feistel rounds (one cycle) of TEA

The encryption and decryption routines of the Tiny Encryption Algorithm (TEA), written in C [5]:

void encrypt (uint32_t* v, uint32_t* k) {
    uint32_t v0=v[0], v1=v[1], sum=0, i;          /* set up */
    uint32_t delta=0x9e3779b9;                    /* a key schedule constant */
    uint32_t k0=k[0], k1=k[1], k2=k[2], k3=k[3];  /* cache key */
    for (i=0; i<32; i++) {                        /* basic cycle start */
        sum += delta;
        v0 += ((v1<<4) + k0) ^ (v1 + sum) ^ ((v1>>5) + k1);
        v1 += ((v0<<4) + k2) ^ (v0 + sum) ^ ((v0>>5) + k3);
    }                                             /* end cycle */
    v[0]=v0; v[1]=v1;
}

void decrypt (uint32_t* v, uint32_t* k) {
    uint32_t v0=v[0], v1=v[1], sum=0xC6EF3720, i; /* set up */
    uint32_t delta=0x9e3779b9;                    /* a key schedule constant */
    uint32_t k0=k[0], k1=k[1], k2=k[2], k3=k[3];  /* cache key */
    for (i=0; i<32; i++) {                        /* basic cycle start */
        v1 -= ((v0<<4) + k2) ^ (v0 + sum) ^ ((v0>>5) + k3);
        v0 -= ((v1<<4) + k0) ^ (v1 + sum) ^ ((v1>>5) + k1);
        sum -= delta;
    }                                             /* end cycle */
    v[0]=v0; v[1]=v1;
}

[5]

delta is derived from the golden ratio: delta = ⌊2^32/φ⌋ = 0x9e3779b9, and the decryption routine starts from sum = 32 × delta (mod 2^32) = 0xC6EF3720.

Architectures

Figure 5: TEA architectures

TEA is implemented using three different architectures. 
The first architecture (Figure 3a) uses multiple 32-bit adders that simultaneously perform the operations needed for one encryption cycle [6]. This parallel structure should be quite large in terms of hardware area but will perform faster [6]. On the other hand, in order to reduce the area, the second architecture (Figure 3b) performs the operations sequentially using a single 32-bit adder [6]. The last design (Figure 3c) uses 8-bit digit-serial adders, an advanced architecture offered by application-specific hardware solutions [6]. The latter two designs are meant as low-area solutions, but in terms of control and data selection their effectiveness remains to be confirmed [6].

Software vs Hardware Implementation of Encryption

Implementing encryption in software is easier to design and upgrade, and it is also portable and flexible [7]. One of the major problems of software implementation is that most typical personal computers have memory external to the processor, and this external memory is used to store raw data or instructions in unencrypted form, so if an attacker gains access to the system, the key can be obtained more easily [7]. One of the most common methods used by attackers is brute force: a special program can easily be designed to brute-force the algorithm. Besides this, reverse-engineering methods are easier to apply to a software implementation. So it can be concluded that software implementation lacks physical security [7]. Implementing encryption in hardware is naturally more secure physically, as the hardware is hard for an attacker to read and inspect [7]. Another advantage of hardware implementation is that all the data in the encryption process flow according to the algorithm, which usually performs operations on the same data [7]. This prevents computing techniques such as out-of-order execution that could disrupt the system [7]. Hardware implementations also tend to be more parallel, so orders of magnitude more work can be done in a given period of time [7]. 
Hardware implementation will be the better choice for encryption in terms of performance, but the cost of implementation is higher compared to software implementation. A higher security level and better performance are the main concerns in this project, so the encryption will be implemented on an FPGA, one of the hardware implementation methods.

Microcontroller, Microprocessor, DSP Processor and FPGA

Microprocessor

The first microprocessors were invented in the 1970s [10]. This was the first time such an amazing device put a computer CPU onto a single IC [10]. Significant processing power became available at rather low cost and in comparatively little space [10]. At the beginning, all other functions, like input/output interfacing and memory, were outside the microprocessor [10]. Gradually, all the other functions were embedded into the same chip [10]. At the same time, microprocessors became more powerful in terms of speed, power consumption and so on [10]. Microprocessors have moved rapidly from 8 bits to 32 bits [10].

Microcontroller

A microcontroller is an inexpensive single-chip computer [9]. The entire computer system lies within the confines of the integrated circuit chip, so it is called a single-chip computer [9]. The microcontroller on its encapsulated sliver of silicon has features similar to those of personal computers [9]. Mainly, the microcontroller is able to store and run a program [9]. The microcontroller contains a CPU (central processing unit), ROM (read-only memory), RAM (random-access memory), input/output lines, an oscillator, and serial and parallel ports [9]. Some more advanced microcontrollers also have other built-in peripherals such as an A/D (analog-to-digital) converter [9].

DSP (Digital Signal Processing) Processor

A DSP processor is a specialized microprocessor optimized to process digital signals [12][13]. 
Most DSP processors are designed for high performance on repetitive, numerically intensive tasks, so DSP processors often have advantages in terms of speed, cost and energy efficiency [11]. DSP processors have the ability to perform one or more multiply-accumulate operations (often called MACs) in a single instruction cycle [14].

FPGA (Field Programmable Gate Array)

Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field programmable gate array, the XC2064, in 1985. An FPGA is an integrated circuit designed to be reconfigured by the user after manufacture. An FPGA design is generally specified using a Hardware Description Language (HDL). FPGAs can be programmed to perform logic functions, and this ability has made them increasingly popular. Using an FPGA in a design can lower non-recurring engineering (NRE) costs, and FPGAs apply to many applications.

Hardware Architectures Comparison

Figure 6 below shows a comparison of the different architectures used for hardware implementations of encryption.

  Architecture     Efficiency  Performance  NRE cost  Unit cost
  Microprocessor   Low         Low          Low       Low
  Microcontroller  Low         Low          Low       Low
  DSP processor    Moderate    Moderate     Low       Moderate
  FPGA             High        High         Low       High

Figure 6: Architectures comparison

Comparing the four architectures above, the FPGA has the advantage in terms of efficiency and performance, but its unit cost is high. Since cost is not a major concern in this project, the FPGA is the better choice for implementing the Tiny Encryption Algorithm.

Altera DE1 Development and Education Board

The Altera DE1 is an FPGA development and education board, and it will be used for this project [17]. 
Below are the features of this board:

Figure 7: Altera DE1 Board

- Altera Cyclone II 2C20 FPGA with 20,000 LEs
- Altera serial configuration device (EPCS4) for the Cyclone II 2C20
- On-board USB Blaster for programming and user API control; JTAG mode and AS mode are supported
- 8-Mbyte (1M x 4 x 16) SDRAM
- 4-Mbyte flash memory
- 512-Kbyte (256K x 16) SRAM
- SD card socket
- 4 push-button switches
- 10 DPDT switches
- 8 green user LEDs
- 10 red user LEDs
- 4 seven-segment LED displays
- 50-MHz, 24-MHz and 27-MHz oscillators and external clock sources
- 24-bit CD-quality audio CODEC with line-in, line-out and microphone-in jacks
- VGA DAC (4-bit R-2R per channel) with VGA out connector
- RS-232 transceiver and 9-pin connector
- PS/2 mouse/keyboard connector
- Two 40-pin expansion headers
- DE1 Lab CD-ROM, which contains many examples with source code
- Size: 153 x 153 mm

A few features of the DE1 board will be used for this project:
- PS/2 mouse/keyboard connector: a PS/2 keyboard is used as input for the plaintext
- 4 push-button switches: used as a reset button
- VGA DAC (4-bit R-2R per channel) with VGA out connector: a VGA monitor is connected to the DE1 board to show the plaintext input and the encryption output, the ciphertext
- 4-Mbyte flash memory: used to store the ciphertext

VGA Controller

IBM introduced the video display standard called VGA (video graphics array) in the late 1980s, and it is widely supported by PC graphics hardware and monitors [18].

Figure 8: Simplified block diagram of the VGA controller

The vga_sync circuit generates timing and synchronization signals [18]. The hsync and vsync signals are connected to the VGA port to control the horizontal and vertical scans of the monitor [18]. Two signals, pixel_x and pixel_y, are decoded from the internal counters [18]. The pixel_x and pixel_y signals indicate the relative positions of the scans and essentially specify the location of the current pixel [18]. 
The video_on signal is generated by vga_sync to indicate whether the display is enabled or disabled [18]. The pixel generation circuit generates the three video signals, collectively the RGB signal [18]. The current coordinates of the pixel (pixel_x and pixel_y) and external control and data signals determine the colour value [18].

PS/2 Controller

IBM introduced the PS/2 port in its personal computers [18]. It is a widely used interface for a keyboard and mouse to communicate with the host [18]. The PS/2 port consists of two wires for communication purposes [18]: one wire transmits data in a serial stream, while the other carries the clock information, which determines when the data are valid and can be retrieved [18]. The data are transmitted in an 11-bit packet that contains a start bit, 8 bits of data, an odd parity bit and a stop bit [18].

Figure 9: Timing diagram of a PS/2 port

Quartus II Web Edition

Quartus II Web Edition is comprehensive design software for system-on-a-programmable-chip (SOPC) design developed by Altera [19]. This software is used in this project to program and implement the Tiny Encryption Algorithm (TEA) on the Altera DE1 Cyclone II FPGA [19]. The program can also be used for simulation and timing analysis [19].

Hardware Description Language (HDL)

A Hardware Description Language (HDL) is a type of programming language used to program and describe digital logic or electronic circuits [20]. It can describe a circuit's operation, its design and its organization [20]. Figure 10 below shows different types of Hardware Description Language in common use.

  HDL      Syntax similarity
  AHDL     Ada programming language
  VHDL     Ada programming language
  JHDL     Java
  Verilog  C programming language

Figure 10: Different types of HDL

Verilog HDL is used to program the FPGA in this project because it is a widely used HDL and its syntax is similar to the C programming language. 
Methodology

Block Diagram

Figure 11: Core module (PS/2 keyboard → PS/2 controller → 128-bit key / 64-bit plaintext → TEA encryption core → 64-bit ciphertext → flash memory and VGA controller → VGA monitor; control signals: encryption/decryption, key update request, acknowledge, busy, asynchronous reset, clock)

The block diagram above explains the design of this project. A PS/2 keyboard is used as input for the plaintext. All the data from the PS/2 keyboard are sent to the PS/2 controller for processing. The processed data, a 128-bit key or 64 bits of plaintext, are sent to the TEA encryption core for encryption. The output of the encryption, the ciphertext, is stored in the flash memory. All the plaintext and ciphertext are sent to the VGA controller for processing and are shown on the CRT monitor. The encryption/decryption signal is connected to a DPDT switch to select between encryption and decryption modes. The key update request signal is also connected to a DPDT switch so that the key is updated when the switch is on. The asynchronous reset is connected to a push button for reset purposes. There is an internal clock on the DE1 board, so no external clock is needed for this project.

Algorithm and Implementation Verification

The original Tiny Encryption Algorithm C source code by the authors will be compiled, or a compiled executable program obtained from another source, to analyze the encryption of plaintext to ciphertext and the decryption of ciphertext back to plaintext. A set of plaintexts, ciphertexts and keys can be generated from the program as a reference and compared with the encryption and decryption output of the FPGA implementation. Figure 12 is an example of a compiled executable program of the Tiny Encryption Algorithm by Andreas Jonsson.

Figure 12

Costing Estimation

  Component                                Quantity  Price
  Altera DE1 board [17]                    1         RM 512.84
  Used 15" Samsung SyncMaster CRT monitor  1         RM 50.00
  Used PS/2 keyboard                       1         RM 10.00
  Total                                              RM 572.84

Gantt Chart

Research and analysis will run from week 6 to week 8. 
Verilog coding of the TEA implementation and the module and test-bench verification must be performed in parallel, because each module should be tested and simulated as soon as it is finished. If simulation and testing are left until the whole design is coded, debugging the errors becomes a big problem. The synthesis of the PS/2 keyboard, VGA monitor and FPGA starts in week 20, just before the coding is finished. The functionality verification task also runs in parallel with the synthesis optimization task.

References and Figures

Figures

Figure 4: Tiny Encryption Algorithm. Available at: http://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm (Accessed: 30 October 2009)

Figure 5: Israsena, P., 'Design and Implementation of Low Power Hardware Encryption for Low Cost Secure RFID Using TEA', Information, Communications and Signal Processing, 2005 Fifth International Conference on, pp. 1402-1406, DOI 10.1109/ICICS.2005.1689288. Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=arnumber=1689288isnumber=35625 (Accessed: 26 October 2009)

Figure 7: Available at: http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=EnglishNo=83 (Accessed: 28 October 2009)

Figure 8: Pong P. Chu (2008) FPGA Prototyping by Verilog Examples: John Wiley & Sons

Figure 9: Pong P. Chu (2008) FPGA Prototyping by Verilog Examples: John Wiley & Sons

Thursday, September 19, 2019

A Future Free of Paper -- Internet Technology Papers

Cyberculture is a term that over 15 years ago you would not have given much thought to. Computers were still foreign and scary to many of us then. We saw this big object that seemed able to zap anything into some dark space and beyond. Now, computers are involved in our lives every day in some way. We use them at work, for school, and even still for the pure enjoyment of them. They are slowly creeping into our lives and taking over our everyday tasks. I no longer need to balance my checkbook; the computer will do it for me. Whenever I need to know a fact fast, I no longer need to pore over the encyclopedia. I can just ask "Jeeves", who has the answers for almost everything. Sure, there are still those diehard fans of doing everything by hand, but eventually they will give in and join the rest of us cyberfreaks. Does this mean it is the end for anything that does not jump on the cyberculture bandwagon? Will we be able to function on technology alone? These are questions that I believe aren't that far into the future. With everything turning to technology, it seems that if you do not join you are quickly left behind. Cyberculture is our future, and it will only advance as more tools and more functions become available to us. What is the cyberculture future, and what will it affect? I believe cyberculture is going to affect everything we do and the manner in which we accomplish it. If we look at the way it has affected our reading and writing as of today, we can already see the change that has occurred and probably the direction we will be heading in. Computers have taken over where paper used to reign, adding their own touches along the way. First look at the way we communicate with othe... ...gh, this will have an effect on the sale of books. There are many tools that are still not in play that I believe will affect the future of books. 
With technology changing every day and growing rapidly, it will only be a matter of time before all the tools are in place, and of course there will always be those diehard book readers who want the real thing in their hands and in front of them. They will slowly be weeded out, though, as the next generation grows up with the technology and looks at a real book as a great way to fix that lopsided table leg.

Works Cited

Tribble, Evelyn, and Anne Trubek, eds. Writing Material: Readings from Plato to the Digital Age. New York: Longman, 2003.

Landow, George. "Twenty Minutes into the Future, or How Are We Moving Beyond the Book?" Tribble and Trubek 214-226.

Lesser, Wendy. "The Conversion." Tribble and Trubek 227-232.

Wednesday, September 18, 2019

Perl: A Popular Scripting Language :: Computers

Perl was created under strange circumstances: it was never intended to be a widely used public language, but the features it provided caused many programmers to crave more. Larry Wall initially created Perl to produce reports from a "Usenet-news-like hierarchy of files for a bug-reporting system." 1 Apparently awk and sed could not handle the task, so Larry decided to fix the problem with a C application now known as Perl, the Practical Extraction and Report Language. Perl grew at the same rate as the UNIX operating system, becoming portable as new features were added. Perl now has extensive documentation available in various man pages, and it is growing today just like every other widely used programming language. Perl is known for its management of data. It can manipulate files and directories and manage tasks, and it can easily analyze results from other applications, including sorting large files that would take a human a long time. Perl is generally used for its scripting abilities. String manipulation is much smoother in Perl than in imperative languages like Java and C. The representation of numeric data in Perl is a little different from other languages: all numeric data is stored as a double-precision floating-point value. For this reason it would not be a good idea to solve complex mathematical problems with Perl, because it would be much slower than using a regular imperative language. String values are sequences of characters, as in most other languages. The convention for scalar variable identifiers is a dollar sign followed by a letter, followed by a sequence of underscores and alphanumeric characters. Scalar variables can contain a single value representing a number, string, or reference. For example, $a = "hello" is just as valid as $b1 = 3.4. Perl has numerous built-in functions, and it allows for user-defined subprograms. Subprograms are an example of data abstraction.
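A short script can illustrate the scalar conventions described above (the variable names here are illustrative, not taken from the original text):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Scalars hold a single value: a number, a string, or a reference.
# All numeric data is stored as a double-precision floating-point value.
my $greeting = "hello";
my $price    = 3.4;

# String manipulation is concise: interpolation, regular-expression
# substitution, and built-in functions like length() are all available.
my $message = "$greeting, world";   # interpolation: "hello, world"
$message =~ s/world/Perl/;          # substitution:  "hello, Perl"

my $length = length($message);      # 11 characters
print "$message ($length chars)\n";
```

Note that both `$greeting` (a string) and `$price` (a number) use the same scalar sigil, which reflects the point above that a scalar's type is determined by its value, not its declaration.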
To define a subprogram you use the convention sub subname { statements; }. To pass parameters you call subname(arg1, arg2). Accessing the parameters is a little different: you must get the values from the temporary @_ array, which is private to the subprogram. Subprograms can return values and can have their own private variables. Perl can take input from files and the keyboard, and can send output to files and the screen.
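The subprogram conventions above can be sketched as follows (the subroutine name is a hypothetical example):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A subprogram receives its arguments in the temporary @_ array,
# which is private to the subprogram, and returns a value explicitly.
sub add_pair {
    my ($x, $y) = @_;   # copy the parameters out of @_
    return $x + $y;
}

# Call with a parenthesized argument list, as described above.
my $sum = add_pair(2, 3);
print "sum = $sum\n";    # prints "sum = 5"
```

Copying `@_` into lexical (`my`) variables at the top of the subroutine is the idiomatic way to give the parameters readable names and keep them private to the subprogram.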

Tuesday, September 17, 2019

Open Polytechnic Nz Operations Management Assignment 1 T3 2012

| 71232 Operations Management | Assignment 1 | Matt Hinkley 3319696 | 12/10/2012 |

Contents
Question 1: Operations management role
Question 2: Types of production
Question 3: Environmental factors
Question 4: Strategic options [case provided]
Case question 1 (customers)
Case question 2 (competitors)
Case question 3 (strategic decisions)
Case question 4 (expansion issues)
Question 5: Measures of quality
Question 6: Types of quality management
Works Cited

Figures
Figure 1: Company structure

Tables
Table 1: Quality characteristic measurements

Part A: Nature of operations management

Question 1: Operations management role

I would imagine that my role would be to ensure the profitable and reliable running of services. A bus service is a continuous service which runs to a schedule on a predetermined route. Our customers base their movements around our schedule and will expect us to adhere to those times. Figure 1 below is an indication of the simplified assumed structure of the company. I have not allowed for maintenance workshops and the like, and have assumed that these functions are outsourced to suppliers.

Figure 1: Company structure

The interactions between the departments are on a two-way information route, and feedback is gathered from the customer by the frontline staff. This could also include the drivers or ticket staff. The long-term strategies of the company would be managed by the CEO and their senior team, and would then be fed down to the operations manager for the day-to-day management to deliver these goals. Operations would see to the efficient running of the services and provide any early warning signs in their reports to senior management. An interface with the customer would also be recommended, by way of an occasional MBWA (management by walking about) style. This enables personal interaction with both the staff and the customer.
Close relations with suppliers should be kept, with the financial side of the relationship being managed by the accounts department. Maintaining this degree of separation enables the ‘good cop, bad cop’ kind of relations, which can be of great benefit when bills come due.

Question 2: Types of production

The bus company is a transportation operation, as it transports people. It works as a mass services production process type. This is demonstrated by the fact that it has many customer transactions, involving limited contact time and little customisation (Nigel Slack, 2011). It does not store stock, but you could argue that resources are stored in the form of bus spares and fuel, should the company have its own depot rather than outsourcing these items. The customers are queued in as much as they wait to be picked up on the route, but they are not defined in a customer list or database.

Question 3: Environmental factors

As with any business, there is more than one company vying for the limited number of consumers. Running buses, you will have very little scope to be better than your competition, so you need to be very careful how you do it. There are a few main reasons why a customer chooses you: price, route, condition of vehicle and convenience. Breaking the four task environments down, and assuming that there is a counter to each of the factors, we can reduce the impact as follows:

Competitors: You could be cheaper or more regular than the competition, or perhaps have newer buses which don't smoke so badly as the others. Try to offer services on the routes which the competitors would struggle to compete on.

Customers: The customer is king (or queen). The most effective way to encourage New Zealand customers is by price. Kiwis love a bargain (Edmunds, 2012), be it the one-day special or concessions for demographic or regular users. But your price will not matter if the route is in the wrong place, so location is a major factor when looking for the target customer.
I'd probably hit high-density student areas; they have low incomes and require transport regularly.

Suppliers: The suppliers in this case would be our vehicle manufacturers, stationery suppliers and most probably land to operate from. You can enter into long-term contracts with the suppliers and bulk order to reduce costs. But as with every business, we are our suppliers' customer, so shop around. The only restraint is location. Buses are easy to buy direct from Japan, with parts sourced just as easily. Suppliers will cause your variable costs to change and as such will have a greater impact on your margin, which needs to be passed on to your customer. Building good solid relationships is imperative.

Labour market: "The central bank expects unemployment of 7.1 percent in the March 2013 year, falling to 5.9 percent in 2014 and 4.9 percent in 2015, according to forecasts in the MPS. That's more pessimistic than the 6.4 percent, 5.3 percent and 4.9 percent forecasts in September" (BusinessDesk.co.nz, 2012). With unemployment predicted to be falling, and the labour market choosing to head to Australia in droves, it makes the pickings slim. A business like the bus company will require skilled tradespeople to service the buses, unless it outsources this, and clerical people to administer the day-to-day operations. This is on top of the drivers and management team. Labour is a large cost for the company, and retention is a big cost reducer, keeping training and trained staff within the organisation. I fly to PNG where I work on a mine site; every time I get into a conversation with the bus driver taking me between the international and domestic terminals, they ask if I can get them a job as a driver on the mine site.

Question 4: Strategic options [case provided]

Case question 1 (customers)

Currently the customer base for HollyRock is teens and school-aged youths. They have been referred to in the article as "young people".
There may be some parents who also attend the restaurant, but I would assume from the way the article reads that this would be in accompaniment of younger people.

Case question 2 (competitors)

From the article we can see that there are two possible current competitors in the area. These are the pizzeria, which serves similar food to HollyRock in as much as pizza, and Robb's restaurant, which also opens Friday evenings. Although neither of these is competing for the same demographic as HollyRock, they do have similarities in goods and services. It is also mentioned in the article that fast food chains had had difficulty in the past gaining approval to operate but, in time, these may be able to move into the area.

Case question 3 (strategic decisions)

To fully answer this question we should look at the details for each component:

1. Structural
   a. Location: Large old house in the middle of a retail area, a 15-minute walk from the schools.
   b. Capacity: Ample parking and facility to seat 75.
   c. Technology: Low to mid technology level.
2. Infrastructure
   d. Work-force: 3 staff, cook, counter staff.
   e. Quality management: Nothing is mentioned regarding quality management, but I would assume this would be handled on a customer feedback system.
   f. Organisation design: A flat structure with an owner-manager. Compact enough to manage easily and able to adapt to its target audience easily.
   g. Policies and procedures: There was no mention of policies. Procedures are simple, with food orders being taken with the issue of a number; empties and waste are collected on a continuous service system. Events seem to be managed by the customer, with a board in place for bands to volunteer to play.

The initial concept was for the local young people to have somewhere safe to be able to gather and ‘hang out’.
The structure of the business would support the initial concept in that it is simple to manage and adapt to the needs of the client. It has furniture that is moveable to accommodate the groups at the time, and the venue offers enough space to cater for the needs of the customer. If the organisation were to grow past the current system, then other changes would need to be brought into place, which would mean tighter management would be needed, and most likely a change in infrastructure. I would therefore say that the decisions do support each other and the overall strategy of HollyRock.

Case question 4 (expansion issues)

The proposed enhancements would step completely outside of the current company structure. Although the base idea is similar, in so much as it is a supply of food to customers, the demographic is vastly separated. Some of the issues to consider are as follows:

* Direct competition with Robb's restaurant, an already well-established lunch and breakfast coffee shop.
* Is the location right for the stay-at-home mum? We note that it's close to the high school, but there is no mention of other facilities which would attract the new client base.
* Interior decor. Do rock posters and picnic tables attract stay-at-home mums looking for a coffee, a chat and some finger food?
* With younger children coming onto the premises, are there implications of the high school kids being turned off the idea of it being a ‘hang out’?
* Suppliers for the different food types will possibly differ, so more contracts and accounts need to be administered.
* Extra equipment will be needed for coffee production and the storage and display of finger foods. As these are generally uncooked foods, they need to be stored separately from the other food types.
* Different skills/personalities of staff required.
Although there may be more intricate details regarding food, health and hygiene legislation, the main points to consider are the local competition and the site suitability for the operation. It may be worth considering the option, but at another location, sponsoring the new location with some brand attachment.

Part B: Quality management

Question 5: Measures of quality

Using the table system as shown in the set text, the quality characteristics which we can measure would be the following:

Quality characteristic | Variable | Attribute
Functionality | Number of meals served | Was the food acceptable
Appearance | Number of seats and layout | Were they cleaned in a timely manner
Reliability | Bands playing or meals delivered on time | Were there any complaints
Durability | Is the venue keeping up with the times | Are the trends of the young people being followed
Recovery | Meals rectified or bands removed | Did the customer feel the staff acted accordingly and in a timely manner
Contact | The extent that customers feel well treated by staff (1 to 5 scale) | Did the customers feel that the staff were helpful (yes or no)

Table 1: Quality characteristic measurements

We could easily apply functionality, appearance and contact to this business, with contact being our quantitative measure. Functionality would be measured on the number of meals served against the number returned due to poor quality. Appearance would be a general measure by the management of the tidiness of the venue throughout the shift. Contact could be measured through a quick and easy two-question tick slip with the customer at the end of their visit. This could be a voluntary measure, as people with strong opinions are certain to leave feedback if it is made easy for them.

Question 6: Types of quality management

There is a range of available approaches, such as TQM, Six Sigma and ISO 9000. Briefly, the systems are as follows.
Total Quality Management (TQM) is a comprehensive and structured approach to organizational management that seeks to improve the quality of products and services through ongoing refinements in response to continuous feedback. TQM requirements may be defined separately for a particular organization or may be in adherence to established standards. TQM can be applied to any type of organization; it originated in the manufacturing sector and has since been adapted for use in almost every type of organization. TQM is based on quality management from the customer's point of view (Rouse, Total-Quality-Management, 2005).

Six Sigma is a management philosophy developed by Motorola that emphasizes setting extremely high objectives, collecting data, and analyzing results to a fine degree as a way to reduce defects in products and services. The Greek letter sigma is sometimes used to denote variation from a standard. The philosophy behind Six Sigma is that if you measure how many defects are in a process, you can figure out how to systematically eliminate them and get as close to perfection as possible. In order for a company to achieve Six Sigma, it cannot produce more than 3.4 defects per million opportunities, where an opportunity is defined as a chance for nonconformance (Rouse, Six-Sigma, 2006).

ISO 9000 is a series of standards, developed and published by the International Organization for Standardization (ISO), that define, establish, and maintain an effective quality assurance system for manufacturing and service industries. The ISO 9000 standard is the most widely known and has perhaps had the most impact of the 13,000 standards published by the ISO. It serves many different industries and organizations as a guide to quality products, service, and management (Rouse, ISO-9000, 2005).

From the three approaches above, only two would lend themselves to our diner environment: the TQM and ISO 9000 approaches.
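The defects-per-million-opportunities measure behind the Six Sigma threshold is a simple calculation; the figures below are a hypothetical diner example, not data from the case:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Defects per million opportunities (DPMO):
#   DPMO = (defects / opportunities) * 1,000,000
# Hypothetical example: 7 returned meals out of 150,000 meals served.
my $defects       = 7;
my $opportunities = 150_000;

my $dpmo = ($defects / $opportunities) * 1_000_000;
printf "DPMO = %.1f\n", $dpmo;   # well above the Six Sigma limit of 3.4
```

Even a seemingly tiny defect rate like this one is more than ten times the 3.4 DPMO ceiling, which illustrates why Six Sigma is generally pursued by high-volume production operations rather than a small diner.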
The Six Sigma philosophy is extremely complex to implement and can take years to show any real savings from a financial perspective. It is also not appropriate to our scenario, as it better suits mass production or production line businesses. Of the two which are left, I would use the TQM approach. It has a far better management system and would suit this small, close-knit workforce. The customer feedback would be available to monitor the results and give indicators for the improvement strategy. The ISO 9000 system is a more formal and managerially implemented system which would detract from the empowerment of the employees in this case, although there may well be some standards within ISO 9000 that could be used in the TQM structure.

Works Cited

BusinessDesk.co.nz. (2012, December 6). NZ official unemployment rate overstates labour market woes, RBNZ says. Retrieved December 10, 2012, from www.sharechat.co.nz:

Monday, September 16, 2019

Research method paper: impact of tourism on local communities Essay

Impact of Ecotourism on Local Communities

Section 1: Introduction

With the main objectives of promoting responsible travel to natural areas, the well-being of communities and environmental conservation, ecotourism is presented as an alternative type of tourism which is growing incredibly fast (Scheyvens, 1999). One of the objectives of ecotourism is to bring benefits to local communities. The important relationship between ecotourism and local communities can be explained by the fact that the traditional homelands of indigenous people are usually the most natural and least developed areas of the world (Coria & Calfucura, 2012). The paper first reviews the different impacts that ecotourism can have on local communities. The development of ecotourism can have an important economic impact and can generate income, employment and business opportunities (Yacob, Shuib, & Radam, 2008). Thus, several developing countries have adopted ecotourism with the hope of improving their economy in an environmentally sustainable manner (Coria & Calfucura, 2012). In the second part, a deep analysis of the methodology of three articles about the impact of ecotourism on local communities is carried out. The analysis shows the strengths and weaknesses of each type of methodology used, and helps to determine which one would be the most suitable when writing an undergraduate dissertation with a similar aim. Concerning the methodology used for this project, the information was mainly taken from university databases, academic journals and reports, as well as research methods books to support the analysis of the methodologies.

Section 2: Literature Review

Ecotourism is being proposed as a strategy that will help to resolve social and economic issues encountered by local communities, and as an adequate and effective way of conserving the environment (Garrod, 2003).
Thus, this concept has been adopted by many developing countries with the hope that it will bring them economic benefits (Coria & Calfucura, 2012). However, several authors have wondered whether local communities really receive those benefits (Jones, 2005). Scheyvens (1999) also agrees that the process of ecotourism will be a success only if local communities share in its benefits. The reasons why local communities should consider ecotourism include becoming aware of the value of natural attractions and understanding the necessity for sustainable tourism and environmental conservation. Also, several benefits should be taken into consideration, such as the additional revenues it could generate for local businesses of any type, as well as the increase in employment opportunities and the enhancement of their culture. Unfortunately, even though ecotourism brings benefits, some drawbacks have to be taken into consideration. For instance, host communities do not participate much in decision making; they are also sometimes exploited for their resources without receiving any benefits; it can damage their community cohesion; and rapid tourism growth can precipitate important socio-cultural changes (Wearing & Neil, 2009). Belsky truly encourages local communities to participate in conservation and ecotourism, but he mentions that they will not do so unless communities benefit from tourism (as cited in Stronza & Gordillo, 2008). Ecotourism certainly brings many economic benefits, but it also improves many different aspects of the communities' livelihoods. Garrod (2003) explains that by involving them in the ecotourism project, they will obtain greater control over their resources and over the decisions concerning the use of such resources that affect the way they live. However, some negative aspects of ecotourism should be considered.
Only a few local communities, engaged in ecotourism or really close to tourism operations and preserved areas, have realized real benefits from it. Several tour operators have been unenthusiastic about the fact that they had to share the possible returns with local communities (Stronza & Gordillo, 2008). In the same way, Lima and d'Hauteserre (2011) stated that tour operators do not help the communities in the way they should. Also, even though ecotourism is generating new revenues, it is increasing the gap between the richer and the poorer. Earnings are most of the time unequal, and conflicts are emerging which are breaking the social cohesion of local communities. From information retrieved in different interviews, it appeared clearly that the profits received were not sufficient and could not support everyone (Stronza & Gordillo, 2008). In other interviews with other communities, the same idea was shared concerning the fact that economic benefits could generate new conflicts within the community, such as disputes between members and misunderstandings concerning revenue distribution and task allocation, which could then lead to a more important problem if people do not collaborate in the right way (Lima & d'Hauteserre, 2011). Some of the interviewees testified that ecotourism was not the solution to fix economic issues, but agreed that it could bring more opportunities, such as establishing a good network, developing new skills and building better self-esteem (Stronza & Gordillo, 2008). Locals seem to become more aware of their own culture through the relations established between tourists and outsiders, and this seems to increase the community's self-esteem and beliefs (Lima & d'Hauteserre, 2011). According to Jones, when local communities are completely involved in the ecotourism process, being directly engaged in decision making and working independently on management tasks, they become aware of the fact that new skills are required.
Therefore, many people attended training sessions, sometimes organized by the government or associations. This helps them to face new realities and new habits (Lima & d'Hauteserre, 2011). Also, ecotourism can have an impact on locals who are not directly working in the ecotourism sector. For instance, the presentation of handicrafts, folklore, tales and, basically, the presentation of their culture appears to reduce the feelings of inferiority that some local people could feel. It also enhances their identity, and they become more aware of their culture, leading to better self-esteem (Lima & d'Hauteserre, 2011). Thus, even if ecotourism could appear as an ideal alternative type of tourism that will help to address economic and social issues affecting local communities, some negative aspects should not be neglected. To make sure that the process works properly, improvements need to be made. Also, local communities should not be exploited and should receive the benefits of their involvement (Wearing & Neil, 2009).

Section 3: Comparison of methodologies

In this section, the methodologies of three different articles used in the previous literature review will be analyzed and compared, taking into account their strengths and weaknesses and more specifically their validity, reliability and truthfulness. The three articles that will be compared are: "Community views of ecotourism" by Stronza, "Ecotourism impacts in the Nicoya Peninsula, Costa Rica" by Almeyda, Broadbent, Wyman, and Durham, and "Community capitals and ecotourism for enhancing Amazonian forest livelihoods" by Lima and d'Hauteserre. All three articles discuss the impact of ecotourism on local communities, but they differ in the methods they used to obtain their information. For a better comparison of the methodologies, the book "Research Methods for Business Students" written by Saunders, Lewis and Thornhill was really useful.

Article 1: Stronza, 2008, "Community views of ecotourism"
- Method and approach used: Quantitative and qualitative approach; use of secondary data; in-depth interviews with local households; semi-structured interviews with community leaders.
- Aim and objectives: Give an overview of what host communities think of the impact of ecotourism.
- Location: Amazon region: Bolivia, Peru, Ecuador.
- Time period: The study was done over six months in 2003 and consisted of three five-day workshops; 2008 (time of publication).
- Sample: Purposive sampling; 164 households (62 from Peru, 67 from Bolivia, 35 from Ecuador, representing 45%, 55% and 7% of the communities' population); one community leader from each community.
- Interview framework: Semi-structured interviews of 2-3 hours; open-ended.
- Limitations: Benefits and indicators of success in each site were determined by emic, or subjective, rather than etic criteria.
- Source: Stronza, 2008.

Article 2: Almeyda, Broadbent, Wyman, and Durham, 2010, "Ecotourism impacts in the Nicoya Peninsula, Costa Rica"
- Method and approach used: Qualitative approach, use of primary data; deductive approach (but inductive at some points); in-depth surveys; semi-structured interviews.
- Aim and objectives: Determine the effects of the Punta Islita eco-lodge on the Nicoya Peninsula in Costa Rica.
- Location: Nicoya Peninsula, Costa Rica.
- Time period: 2010.
- Sample: Purposive sample of 63 households, of which 45 had at least one member employed in the lodge and 17 were not employed by the tourism industry but still received revenue from it; random sampling for employees' in-depth surveys; 39 tourists filled out self-administered questionnaires.
- Interview framework: In-depth interviews with households; semi-structured interviews with community leaders; self-administered questionnaires for hotel guests.
- Limitations: This research may reflect a situation that might change; sample size.
- Source: Almeyda, Broadbent, Wyman, and Durham, 2010.

Article 3: Lima and d'Hauteserre, 2011, "Community capitals and ecotourism for enhancing Amazonian forest livelihoods"
- Method and approach used: Qualitative approach; use of secondary and primary data; mix of inductive and deductive approaches; structured participant observations; in-depth and semi-structured interviews.
- Aim and objectives: Investigate how ecotourism development enhances existing capital at community level.
- Location: Brazil, Amazonia: Maripa, Maguari, Jamaraqua.
- Time period: Three months of fieldwork; 2012.
- Sample: 27 community inhabitants; 42 local stakeholders (10 people from tour operators, 10 from NGOs and 22 from government environmental agencies).
- Interview framework: In-depth questionnaire-based surveys; not specified with whom each type of interview was conducted.
- Source: Coria and Calfucura, 2012.

Table: Comparison of methodology

The first article, written by Stronza, takes an interesting approach, as an overview of the topic is first given to describe ecotourism in general as well as the possible benefits it could bring to local communities. The author then relied on a study done five years before, whose goal was to hear the communities' opinions, using in-depth interviews with local households and semi-structured interviews with community leaders during workshops. As the study was done in different countries, namely Peru, Bolivia and Ecuador, it allows readers to think at a large scale, and it is probably more reliable than a study done only in one specific area. A possible weakness would be that, as in-depth interviews are used, even though interviewers have some key questions that they need to cover, their use will vary from one interview to another.
Concerning the second article, written by Almeyda, Broadbent, Wyman, and Durham, it is mostly based on the collection of primary data, with surveys, interviews and questionnaires that the authors conducted themselves at one specific eco-lodge among guests, employees and locals. The weakness of this article, even if none is mentioned in it, is probably the sample size and the fact that semi-structured interviews can lead to data quality issues. Indeed, as it may be hard to standardize the different kinds of interviews, this may lead to a reliability problem. Also, interviews reflect reality at the moment they were conducted, and therefore the results obtained from those interviews would not automatically be the same if similar interviews were conducted in the future. In contrast with the first article, this one used mainly primary data, whereas the first one used secondary data. Also, in this article, the study was undertaken only in Costa Rica, which was probably the purpose of the writers, but it narrows the research for someone reading the article. The writers could have extended their study to another country to compare both analyses. The last article consists of a collection of secondary and primary data, collected through structured participant observations as well as in-depth and semi-structured interviews. The strength of this article is that, with secondary data sources, it provides data that are easy to check. Also, it allows scholars or researchers to save time and effort by providing the thoughts of several authors about one specific topic. However, when using secondary data, readers have to be careful that the sources cited in a literature review were not misunderstood by the person writing it, and that they are reliable and valid sources. A possible weakness of the last article would be that the case study does not automatically reflect what is happening in other regions.
As the study took place in Brazil, it is not a standardized model that could be applied to another community anywhere else in the world. Also, their sample was really interesting, as they interviewed people from NGOs, tour operators and governmental agencies as well as local people. Thus, once the information had been gathered, it gave readers a better and more generalized overview of the impact of ecotourism on locals. Structured observations also help to do that, but the main issue with them is the question of reliability, as the observer might interpret something in the wrong way; the observer should therefore make sure he has understood the setting very well before interpreting.

Section 4: Selection of Methodology

Of the three articles cited in the above section, the one with the most appropriate methodology for the dissertation of an undergraduate student would be the first one. As previously analyzed, the methodology used in this article first proposed a sort of literature review, which seems crucial to gain an overview of the topic, and then a series of results obtained through in-depth interviews with local households and semi-structured interviews with community leaders. The most interesting thing is that it represents three different countries, Peru, Bolivia and Ecuador, which are close to each other in South America but represent different cultures. By providing both qualitative and quantitative data, it gives the student a better understanding of the topic. However, the sample size was not always appropriate, as it did not always represent the majority of the population. Special attention should be given to the size of the sample to make the study reliable. Indeed, if the majority is not represented, the study can be considered unreliable.
The semi-structured and in-depth interviews are, for an undergraduate student, probably one of the best ways to gain a better understanding of the topic, as the interviewer can adapt the questions from interview to interview. They will be really helpful for exploring in depth the topic the student is interested in.

References

Almeyda, A. M., Broadbent, E. N., Wyman, M. S., & Durham, W. H. (2010). Ecotourism impacts in the Nicoya Peninsula, Costa Rica. International Journal of Tourism Research, 12(6), 803-819. doi:10.1002/jtr.797

Coria, J., & Calfucura, E. (2012). Ecotourism and the development of indigenous communities: The good, the bad, and the ugly. Ecological Economics, 73, 47-55. doi:10.1016/j.ecolecon.2011.10.024

Garrod, B. (2003). Local participation in the planning and management of ecotourism: A revised model approach. Journal of Ecotourism, 2(1), 33-53. doi:10.1080/14724040308668132

Jones, S. (2005). Community-based ecotourism. Annals of Tourism Research, 32(2), 303-324. doi:10.1016/j.annals.2004.06.007

Lima, I. B., & d'Hauteserre, A.-M. (2011). Community capitals and ecotourism for enhancing Amazonian forest livelihoods. Anatolia, 22(2), 184-203. doi:10.1080/13032917.2011.597933

Scheyvens, R. (1999). Ecotourism and the empowerment of local communities. Tourism Management, 20(2), 245-249. doi:10.1016/S0261-5177(98)00069-7

Stronza, A., & Gordillo, J. (2008). Community views of ecotourism. Annals of Tourism Research, 35(2), 448-468. doi:10.1016/j.annals.2008.01.002

Wearing, S., & Neil, J. (2009). Ecotourism: Impacts, potentials and possibilities (2nd ed., pp. 115-136). Oxford, England: Butterworth-Heinemann.

Yacob, M. R., Shuib, A., & Radam, A. (2008). How much does ecotourism development contribute to local communities? An empirical study in a small island. The Icfai Journal of Environmental Economics, VI(2), 54-68.

Sunday, September 15, 2019

Spirituality through community

In â€Å"Cathedral,† Raymond Carver wrote the story of an unnamed male narrator who describes a visit from Robert, a blind male friend of his wife. Roberts’ arrival and stay in the narrator’s home causes the narrator to abandon his stereotypes about blind people and to understand himself better. Carver, through his story, claims that in order to be free we must detach ourselves from stereotypes and focus on self understanding. Carver uses â€Å"Cathedral† as the title for his story in order to emphasize that the process of completing a cathedral is more important than the end result, which could take approximately one hundred years. In the process of drawing a cathedral with the blind man, the narrator, putting himself in Robert’s shoes, is enlightened while a meaningful relationship develops between the two men. The narrator goes through a process of transformation. In the beginning of the story, the narrator is very much against Robert’s visit. Jealousy and hatred seem to overcome him. His wife’s fondness for Robert and their close friendship that has spanned thousand of miles and ten years bothers him. Furthermore, the stereotypical image that he has built in his mind about blind men hinders him from welcoming Robert into his home and into his life. However, things change as the narrator and Robert begin on a quest to draw a cathedral. The end result is not the cathedral drawn but the feeling that overcomes the narrator after having embarked on the process. The narrator’s new found consciousness would not have come about if not for the process. By drawing, the narrator is able to experience different feelings that have been alien to him before. Even with eyes closed, the narrator still succeeds in producing the cathedral. This demonstrates that the value is not in the final product but in the journey that one undergoes to reach it. It is not the end product that heightens spirituality in an individual; it is the journey that allows a person to reach further. 
It is not the end product but the journey that allows the person to experience. Without the process, there is no experience. Looking at someone else's work is far different from producing the work oneself; one appreciates the end product more if one realizes the work that goes into producing it. The story of "Cathedral" clearly demonstrates this. The narrator had difficulty describing the cathedrals shown on television because he had little understanding of, or experience with, cathedrals. As the narrator says, "I can't tell you what a cathedral looks like. It just isn't in me to do it. I can't do any more than I've done." His difficulty stems not from his inability to see the cathedral; it comes from his lack of experience and understanding of what a cathedral is and what it stands for. The narrator sees no value in cathedrals. He says, "The truth is, cathedrals don't mean anything special to me. Nothing. Cathedrals. They're something to look at on late-night TV. That's all they are." However, having embarked on the process of drawing a cathedral, the narrator is able to experience and to build a new perspective on things. This goes to show that it is not the end result but the journey toward it that really matters.