23 Jun 2010

Sample Essay: Globalizing the Cost of Capital and Capital Budgeting at AES

Before setting up a business, whether international or local, there are several factors to consider. Even if a business is doing well and expanding rapidly, one must weigh the risks attached to it. In the case of AES, the founders did not give much consideration to the risks of expanding overseas. Their main undoing was the assumption that the risks involved abroad were the same as in the U.S. AES had the majority of its revenues linked to overseas operations, with approximately one-third coming from South America alone. Since the company depended on these operations almost wholly, any adverse change there could affect it greatly, and that is why the company's international exposure hurt AES during the global economic downturn that began in late 2000.

In addition, AES did not take into account that, as a global company operating in countries hugely different from the U.S., it needed a more sophisticated way to think about risk and the cost of capital around the world. Moreover, with AES's international expansion, its capital budgeting model should not have been exported to projects overseas: the model became increasingly strained with the expansions in Brazil and Argentina because hedging key exposures such as regulatory or currency risk was not feasible. In addition, the financial structure of a going-concern business like a utility is notably different from that of a limited-lifespan asset like a generating facility.

Factors such as the devaluation of key South American currencies conspired to weaken cash flow at AES subsidiaries and hinder the company's ability to service subsidiary- and parent-level debt. This was especially true during 2001, when a political and economic crisis in Argentina brought about a significant devaluation of most South American currencies against the U.S. dollar. It became most evident in December of the same year, when the newly elected government abandoned the country's fixed dollar-to-Argentine-peso exchange rate (1:1) and converted U.S. dollar-denominated loans into pesos, which resulted in the peso losing 40% of its value against the U.S. dollar. The currencies in Brazil and Venezuela followed suit, with the Brazilian real and the Venezuelan bolivar each depreciating approximately 50% against the U.S. dollar during the same period. As a result, AES recorded foreign currency transaction losses of $456 million in 2002, creating a raft of financial problems for the company. The impact of the devaluation was magnified because the foreign businesses were paid in local currency but were obligated to repay debt denominated in U.S. dollars.

Besides this, adverse changes in energy regulatory environments, especially in Brazil, hurt the company. Brazil had failed to produce a market structure sufficiently attractive to encourage domestic construction of new generation assets, so demand exceeded supply, causing shortages. When short rainfalls forced a power rationing program (the majority of Brazil's generation capacity is hydroelectric), AES lost sales volume. This triggered a regulatory conflict concerning the applicable exchange rate for the real-to-dollar energy-cost pass-through provisions in AES's contract, resulting in AES taking a pretax impairment charge of approximately $756 million on Eletropaulo, one of its major Brazilian businesses.

Adding to AES's woes was the decline in energy commodity prices. As earnings and cash distributions to the parent started to deteriorate, AES's stock collapsed and its market capitalization fell nearly 95%, from $28 billion in December 2000 to $1.6 billion just two years later. The change in the regulatory regime in the U.K. also adversely impacted AES by increasing competition and reducing prices in its generation markets. That, along with an unusually warm winter in the U.K., brought wholesale electricity prices down approximately 30%. These pressures caused several counterparties to default on their long-term purchase agreements. This counterparty risk, coupled with changes in the commodity markets, heightened the financial pressure on AES facilities, and those that could not sell electricity above their marginal costs were taken off-line or shut down.

To crown it all, the company's overdependence on foreign markets, especially on competitive supply, which accounted for 21% of AES revenues, was too risky. This segment depended heavily on changes in the price of electricity, natural gas, coal, oil and other raw materials; weather conditions; competition; changes in market regulations; interest rate and foreign exchange fluctuations; and the availability and price of emissions credits. Any change to these brought price volatility, which dented several businesses, including the Drax plant in the U.K., the largest plant in AES's competitive supply fleet. Financially speaking, the company should have reduced its overdependence or conducted a proper risk assessment to that effect.

At first, a 12% discount rate was employed for all projects. In a world of domestic contract-generation projects, where most risks could be hedged and businesses had similar capital structures, capital budgeting at AES was fairly straightforward, which is why the approach worked well initially. The situation changed when AES moved beyond primarily domestic contract-generation projects into businesses where the risk of changes to input and output prices was much greater. Even so, the company held on to its model: the economics of a given project were evaluated at an equity discount rate for the dividends from the project, all dividend flows were considered equally risky, and a 12% discount rate was used for every project. This was particularly dangerous, since risk factors differed from one project to another. The model became increasingly strained with the expansions in Brazil and Argentina because hedging key exposures such as regulatory or currency risk was not feasible, and the financial structure of a going-concern business like a utility is notably different from that of a limited-lifespan asset like a generating facility. Nonetheless, in the absence of an academic or other alternative, the basic methodology remained intact. The ever-increasing complexity of the financing of international operations also contributed to this. As Exhibit 6 shows, subsidiaries A and B were financed with debt that was nonrecourse to the parent: the subsidiaries' creditors had claims on the hard assets at the power plants but not on any other AES affiliate or subsidiary. The local holding company, which often represented multiple subsidiaries, also borrowed to finance construction or acquisitions and received equity in the various subsidiaries it held. In addition, the holding company had debt that was nonrecourse to the parent, secured by dividends from the operating company.
Finally, AES borrowed once again at the parent level in order to contribute equity dollars into holding companies and subsidiary projects. At the end of 2002, AES had $5.8 billion in parent company (recourse) debt and $14.2 billion in nonrecourse debt. Under this subsidiary structure, the parent company received cash flows in the form of dividends from each subsidiary (some of which were holding companies) and, because the structure of every investment opportunity was essentially the same, all dividend flows were evaluated at the same 12% discount rate. This had the benefit of making similar projects seemingly comparable. However, when subsidiaries' local-currency real exchange rates depreciated, leverage at the subsidiary and holding-company level effectively increased, and the subsidiaries struggled to service their foreign currency debt. Imagine a real devaluation of 50%: EBITDA in dollar terms is cut by 50%, and coverage ratios deteriorate by more than 50%. The local holding company cannot service its borrowing, and dividends to the parent are slashed. Ultimately, consolidated leverage was well over 80% without any hedging of foreign exchange for any meaningful duration; this is where the model broke down. WACC is a calculation of a firm's cost of capital in which each category of capital is proportionately weighted. All capital sources (common stock, preferred stock, bonds and any other long-term debt) are included in a WACC calculation. All else held equal, a firm's WACC increases as beta and the required return on equity increase, since an increase in WACC implies a decrease in valuation and higher risk. Venerus's solution to the problem had to be consistent, transparent, and accessible.
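The devaluation arithmetic above can be sketched in a few lines. All figures below are hypothetical, not AES data; the point is that when part of the cost base and all of the debt service stay fixed in dollars, dollar EBITDA and coverage fall by more than the headline devaluation:

```python
# Hedged sketch of the devaluation arithmetic. All figures are hypothetical,
# not AES data. Revenue and some costs are in local currency; the remaining
# costs and all debt service are fixed in U.S. dollars.
revenue = 200.0        # local-currency revenue, in USD terms pre-devaluation
costs_usd = 40.0       # dollar-denominated costs (fixed in USD)
costs_local = 60.0     # local-currency costs, in USD terms pre-devaluation
debt_service = 50.0    # fixed USD debt service

ebitda_before = revenue - costs_usd - costs_local      # 100.0
coverage_before = ebitda_before / debt_service         # 2.0x

deval = 0.50  # a 50% real devaluation halves local-currency amounts in USD terms
ebitda_after = revenue * (1 - deval) - costs_usd - costs_local * (1 - deval)
coverage_after = ebitda_after / debt_service

print(ebitda_after, coverage_after)  # 30.0 0.6
```

Because the dollar costs do not shrink with the currency, dollar EBITDA here falls 70% rather than 50%, and coverage drops from 2.0x to 0.6x; this is the sense in which leverage "effectively increased".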
He knew his solution would have to account for changes in required returns due to leverage, incorporate some understanding of a project’s risk profile, potentially include country risks, and still provide values that were consistent with market behavior, including trading multiples. Broadly speaking, a company’s assets are financed by either debt or equity. WACC is the average of the costs of these sources of financing, each of which is weighted by its respective use in the given situation. By taking a weighted average, we can see how much interest the company has to pay for every dollar it finances.

A firm's WACC is the overall required return on the firm as a whole and, as such, it is often used internally by company directors to determine the economic feasibility of expansionary opportunities and mergers. It is the appropriate discount rate to use for cash flows with risk similar to that of the overall firm. Venerus therefore set out to overhaul the capital budgeting process and evaluate each investment as a distinct opportunity with unique risks.

WACC

According to the new approach,

WACC = (E/V) × re + (D/V) × rd × (1 − Tc)

Where:
re = cost of equity
rd = cost of debt
E = market value of the firm's equity
D = market value of the firm's debt
V = E + D
E/V = percentage of financing that is equity
D/V = percentage of financing that is debt
Tc = corporate tax rate
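As an illustration of the formula above, here is a minimal sketch in Python. The input figures are hypothetical and not drawn from the AES case:

```python
# Minimal sketch of the WACC formula. Inputs are hypothetical.
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """WACC = (E/V) * re + (D/V) * rd * (1 - Tc), with V = E + D."""
    value = equity + debt
    return (equity / value) * cost_of_equity + (debt / value) * cost_of_debt * (1 - tax_rate)

# A firm financed with $600m equity (12% cost) and $400m debt (8% cost), 30% tax:
rate = wacc(equity=600, debt=400, cost_of_equity=0.12, cost_of_debt=0.08, tax_rate=0.30)
print(round(rate, 4))  # 0.0944, i.e. 9.44%
```

Note how the tax shield applies only to the debt term: the 8% pre-tax cost of debt contributes at an after-tax 5.6%, which is why adding cheap debt can lower WACC even as equity stays more expensive.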

13 May 2010

Sample Essay: PP Recruitment Agency

Introduction

When John and Janet started their company, PP Recruitment Agency, they had prepared a strategy to enhance their company's image. Because the company started as a small office, they were able to spend a lot of time with their clients and customers: clients being the companies that required their service, and customers being the candidates who approached them for jobs. This strategy was extremely successful, as the company could get an idea of what each client wanted and also understand each customer's background and strengths. This helped John and Janet immensely, and the company began to grow. Also working in their favour was that, because of their close-knit approach, their success spread through word of mouth. Their professionalism increased their reliability and appeal, which led to more candidates and clients approaching PP for services.

As business expanded, John and Janet extended PP Recruitment Agency's operations to Manchester, Leeds and Nottingham. With this expansion, they found it difficult to offer the same kind of personalised service to all their clients: with offices in three new cities, PP had to recruit more staff to handle its growing operations, and this hampered the personal service. After five years of trading in which revenue and annual profits had consistently increased year on year, profits declined, as did revenue. The number of clients and candidates decreased, and it became difficult to monitor the operations of each branch to make an assessment. Each branch had a branch manager, a team of five or six dedicated consultants who dealt with clients and candidates directly, and at least one administrative assistant to handle all correspondence with clients and candidates and maintain branch records; the Sheffield branch had, in addition to Janet Doe and John Smart, their personal assistant May Help and an accountant, Fred Beancounter, plus his team. Without a proper information system in place, PP's operations began to go haywire, and details of clients and candidates were held up at branch level. Without a proper link between the branches and head office, searching for suitable candidates across branch offices became difficult, if not impossible.
While Fred tried to implement a system to retrieve information from the branches using spreadsheets, it was messy and time-consuming. Because PP had no centralised information system in place:

Collating and analysing data was difficult

They couldn't answer clients' queries on market trends

Analysis

What John and Janet didn't expect was for their personalised service to suffer as a result of their expansion. The problem with such an operation is that unless you are present at every meeting, be it with a client or a candidate, it becomes difficult to assess the client's requirements or the candidate's strengths. Both John and Janet did this exceptionally well when they started, but as the organisation got bigger, they failed to make the required changes. The company didn't have a proper information system in place to handle the larger database created by the increased number of branches, and they couldn't host their clients' requirements on a system accessible to all the branches and head office, resulting in delayed transmission of information and poor service.

While the recession could have been a cause of the decrease in business, it was PP's inability to meet clients' needs in time that cost them dearly.

Another issue with PP was that the company did not meet the requirement of training its personnel to meet the demands of the market. As long as John and Janet were running the show, they were able to attend to all their clients and candidates evenly; however, when they expanded and recruited new staff for the branches, they failed to train that staff in handling clients and candidates, and as a result, quality suffered.

It was therefore these two factors, an improper information system and untrained personnel, that led to the fall of PP's operations.

Literature Review

Information systems are the software and hardware that support data-intensive applications. Many scholars believe that the business world is moving from an industrial economy to an information economy. "The convergence of computing, telecommunications, and software is not only enabling new forms of competition, but the digital convergence of various states of information; data, text, voice, graphics, audio, and video, is also spawning new business opportunities and new ways of communicating." Few would argue against the claim that all spheres of business and economic activity today hinge on information technology; it would be unimaginable to conduct business without information. Business managers spend much of their time on information processing, and do so through technological tools such as "executive information systems, groupware, video-conferencing, and so on" (Currie, 1999).

Computers are versatile machines with immense usage. A computer can be used as a calculator or a notepad, for drawing graphical representations, or for generating account statements, and so on. Beyond these, the commercial value of a computer is that it can help people take decisions using the data available to them. Decision-making has never been easier, thanks to computers, and the decision-making process should be focused to avoid unnecessary waste of time. Computer applications such as MacProject and MultiPERT enable decision makers to plan out projects while experimenting with "what if" scenarios and monitoring the actual effects of implemented decisions.

No product or company can survive competition or sustain its identity without asserting itself on two basic components of marketing: image and people. If not managed properly, these components can break a product, or in other words, a brand. Brands and people have to be owned, nurtured and developed by an organization. They are the ultimate differentiators and value creators. Companies such as Pepsi, Coca-Cola, Levi's, and Cadbury are examples of well-managed brand companies, whereas Enron and Andersen are cautionary examples. So powerful is this medium that unless it is harnessed properly, sustainability, popularity, and growth are at risk.

Thus, the elements that affect an individual’s relationship with a brand are:

The relationship between the product and the customer

The type of person the brand represents; the consumer obviously would like the brand's personality traits to match his or her own.

A customer's relationship with an organisation is built on trust and reliability. A Customer Relationship Strategy fundamentally reshapes the thought process of an organisation to build a strong customer base. Focusing on people, business processes, performance management systems and technologies is a sure way of satisfying customer needs.

With an effective Customer Relationship Strategy, an organization can:

Identify, acquire, retain and develop more profitable customers

Align the business, marketing, and sales strategies with customer care

Achieve a customer centric organization with a clear contact management strategy (Hitachi, 1994).

Customer Relationship Management

This came about because of the change in consumer buying behaviour. The consumer had become knowledgeable and demanding, and this forced companies to shift their priorities accordingly. This gave rise to the growing concentration on customer relationships; customer retention became the core focus of corporate and other companies in their marketing activities. This is founded on the belief that customer retention leads to economic performance in terms of both turnover and costs (Anderson et al., 2000, p.869-882).

This is where relationship marketing assumes significance.

‘Relationship marketing covers all actions for the analysis, planning, realization and control of measures which initiate, stabilise, intensify and reactivate business relationships with a company’s stakeholders, namely its customers, and in the creation of a mutual value.’

Customer relationships are critical to the success of a business, and their quality depends on the kind of relationship a company maintains with its stakeholders (Bruhn, 2003, p.2-11).

E-business has revolutionised commerce and trade. Gone are the days when business transactions were done manually; today, in a highly competitive market, it is important for businesses to have information at their fingertips to cater to customer demands. It is information technology that drives today's business. For fashion retailers, maintaining cordial relations and offering customer-oriented service is what will keep them at the forefront of competition. Through the introduction of Customer Relationship Management (CRM), fashion retailers can capitalise on customer expectations, restructure their corporate and communication networks, and implement strategies that outdo the competition. Strategy is vital to the success of any organisation, and it is no different in the fashion industry. Strategic management is the organizing of products, services, processes and systems to meet customer needs. The strategy process addresses:

The perceived strengths and weaknesses of the current practices

A customer’s behavior (likes and dislikes)

The identification and implementation of technology that can overcome the present inefficiencies

It is no secret that today's business is based on customer satisfaction and relationships. Thus, whatever the strategy, it must be customer-centric (Pettinger, 2004, p.215-217). It is in this context that many organisations today consider the introduction of Customer Relationship Management (CRM). This technology will not only enhance customer service but, most importantly, improve customer relationships, so vital to an organisation's success.

Case Study: The Great British Outdoor Market

In the 60 years since the Second World War, the UK outdoor clothing and sports gear industry has witnessed phenomenal growth, thanks to the intrinsic relationship between industry bosses and sportspeople. This growth brought to eminence brands like Karrimor and Berghaus, now internationally known for their durability, weatherproof comfort and style. Changes were mandated by the post-war economic growth, which resulted in abundant leisure time, better communication networks, easier product access and a renewed focus on marketing activities. New product development in the materials sector, including nylon and other plastics-based products, set the tone for innovation to meet the growing needs of sportspeople.

Initially, the industry bosses, being sports enthusiasts themselves, developed suitable products through their extensive network of personal friends and family members. The entrepreneurs' innovation was based on their personal experience and conducted through social contact, leading to profitable gains. Also, since the products were developed through this network of social contacts, they were produced with great care and craftsmanship, reflecting a deep understanding of the needs of the end-users. It also brought commercial benefits, as the sold products could be photographed on the sportsperson and used to promote the products to a wider audience through advertising and catalogues.

Gradually, the entrepreneurs began to extend their network beyond their immediate circle of friends and family to generate information and expertise that could be used to expand their customer base. The new initiative instilled contractual agreements and strategic alliances, such as those between Karrimor and BM Coatings (rucksacks) and Berghaus and Gore-Tex (clothing). This marriage between companies resulted in shared knowledge and innovation, as well as profitable distribution channels and licensing arrangements. Retailers became significant as collectors of product data flowing between customers and manufacturers, and suddenly customers became the source of inspiration for further product development. The wider the network, the more radical the innovation, leading to higher market growth (Harwood et al., 2006, p.105-111).

Customer Relationship Management

Customer Relationship Management (CRM) is an important tool in customer-centric strategies today (O’Brien, 2003, p.182). It not only enhances the quality of service but also adds strength to compete in the highly competitive service industry. According to Dong (2007), CRM systems are enterprise applications that support and integrate customer-oriented business processes such as marketing, sales, and customer service to manage business interactions with customers. This can be done by analysing and distributing a customer's personal details and needs to the relevant departments within a retail fashion business for personalised service. CRM can also help record guest preferences, so that customers feel elated at being recognised without a formal introduction. Their personal preferences, billing details, methods of payment, and dressing habits can be used to create an atmosphere that shows how important that individual is to the retail business. This elicits a warm response from the customer, who may then recommend or vouch for the retail business to his or her friends, increasing the sales volume of the fashion business.

Siebel Systems, Oracle, PeopleSoft, and SAP are leading CRM software vendors, who offer solutions that integrate and automate customer processes to allow fast, convenient, reliable, and consistent services to their customers.

The latest release of SAP CRM delivers customer-inspired innovations and state-of-the-art, Web-based user interfaces that enable an organisation to delight its customers, empower its support team, and improve its business.

SAP solutions and applications support a wide range of business processes related to Customer Relationship Management applications, including:

Category management

Multi-channel retailing

Price management

Price optimisation

Sales order management (SAP.com)

The following block diagram (No. 1) gives a fair idea of what a CRM system design would look like.

Block Diagram Courtesy: www.crm-strategy.net/crmpres.htm

While many companies still take the traditional approach of introducing CRM purely as a tool for front-office operations, many have gone a step further to include not just front-office data but also enterprise-wide information about customers' buying habits and profitability. Moreover, fast and flexible deployment focused on strategic business priorities is critical for any CRM implementation. CRM provides a platform for business agility, enabling a company to adapt its processes easily to constantly changing business dynamics, manage business processes across and beyond the enterprise, and transform itself into a customer-centric enterprise.

In a nutshell, CRM will:

Evolve as business grows with support for end-to-end business processes

Empower users with role-relevant customer insights

Gain immediate value

Increase customer satisfaction

Reduce cost of ownership (mySAP, 2005).

Many organisations have been known to strategise based on their own requirements and interests, which led to poor customer response and sales. It was then that many began to focus their strategies on customer needs. From a customer's perspective, if a product or service was variable in quality and/or complex, and if that product was intangible in nature, the resultant outcome was risk and uncertainty, and customers would stay away from making any attempt to test those grounds. Risk reduction and reduced uncertainty are factors that customers find particularly important in the context of some markets (Morgan and Hunt, 1994).

The more a customer builds a relationship with an organisation, the more knowledgeable the organisation becomes about the customer's requirements and needs. Thus, having a long-term, ongoing, stable relationship with a particular fashion retailing house will obviously reduce uncertainty and risk and enhance the relationship. The relationship becomes so strong that even where customers are aware of competitors offering the same or better service, they will choose to stay in the relationship because of its predictability and comfort (Harwood et al., 2006, p.107-111).

Training

For a workforce to perform at the highest level, employees need training on a regular basis. Training the workforce has become such an important part of an organisation's success that many companies with some form of training programme spend part of their labour budget on training, and those with highly effective programmes spend considerably more. Training and development of personnel is a continuous process, and organisations that ignore this sphere of HR activity are most likely to succumb to competition. In a highly charged-up competitive world, success comes with innovation and technology. Organisations continue to introduce new manufacturing processes, information technologies, and equipment to stay competitive, and this mandates the introduction of training programmes so that staff can acclimatise to and master production strategies. This can come about only through continuous training. Getting the staff to work on different machinery and tools will also enhance production prospects, decrease monotony among the workforce, and in the process improve retention. Despite these changes, many organisations still fail to come to terms with the fact that well-funded employee development programmes are central to the success of their business.

Training employees will not only lead to a healthy work environment, but it will also enhance productivity and lead to all-round development of employees and organisation.

Providing support and resources for continuous performance improvement is a vital part of the HR functionality (Weston, 1995).

Working in collaboration with line managers and other employees, HR specialists can help identify new skills or competences that are required to train and develop the workforce in enhancing their performance (Weston, 1995).

References

Anderson, Eugene W. and Fornell, Claes, 2000, Total Quality Management & Business Excellence: Quality Assurance & Total Quality Management, Volume 11, Issue 7, September 2000, pages 869-882

Bruhn, Michael, 2003, Conceptualization and empirical results of a pilot study in Switzerland, European Journal of Marketing, Vol. 37, No. 9, 2003, p.1187-1024

Currie, Wendy, 2000, The Global Information Society, Market as Opportunity: Developing Business-to-Business Internet Commerce, John Wiley & Sons, West Sussex, UK

Harwood, Tracy G and Garry, Tony, 2008, Relationship marketing: why bother? Handbook of Business Strategy, Volume 7, Issue 1, p.105-111, ISSN: 1077-5730, DOI: 10.1108/10775730610618701, Emerald Group Publishing Limited

Hitachi Consulting, Customer Relationship Strategy, 1994, http://www.hitachiconsulting.com/page.cfm?id=customerrelationshipstrategy

Morgan R. M, and Hunt S. D, 1994, The Commitment-Trust Theory of Relationship Marketing, Journal of Marketing, 58, p.15-40

mySAP, Customer Relationship Management: Solution Overview, 2005, http://www.sap.com/solutions/business-suite/crm/pdf/BWP_mySAP_CRM_Solution_Overview.pdf

O’Brien J, 2003, Management Information Systems (6th Ed) The McGraw-Hill Companies, p.175-190

Pettinger, Richard, 2004, Contemporary Strategic Management, Palgrave Macmillan, ISBN 1-4039-1327-7

Weston, F.C, 1995, What Do Managers Really Think of the ISO 9000 Registration Process? Quality Progress, Volume 28, Issue 10, p.67-73, in Christine Avery and Diane Zabel's, 1997, The Quality Management Sourcebook: An International Guide to Materials and Resources, Routledge, London, p.178

17 Oct 2009

Sample Essay: Network Measurement: Effectiveness and Efficiency

Introduction

Effectiveness and efficiency can be achieved simultaneously, at high standards, in any business organization. While effectiveness refers to the degree to which an organization realizes its goals by accomplishing set tasks, efficiency relates to the quantity of resources used to achieve those goals (Daft and Lane, 2008).

This report sets out to answer the question, "Which metrics are used to measure the effectiveness and efficiency of a network in pursuit of organizational goals?" The report answers this question in detail.

There are three approaches to the metrics used in network analysis: PERT, CPM, and Event Chain methodologies.

PERT

PERT stands for Program Evaluation and Review Technique and is used to evaluate the duration of a program's schedule. It takes into account the fluctuations associated with every activity in a project, applying various statistical measurements to accommodate each activity's fluctuations.

Before these measurements are tackled, it is essential to show the procedure used under PERT, which is as follows:

Identification of particular activities as well as their milestones

Determination of the activity-sequence

Network diagram construction

Estimation of the durations per every activity

Critical path determination

Updating of the PERT chart depending upon the progress of the program or project.
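The critical-path determination step above can be sketched as a longest-path computation over a precedence network. The activity names, durations, and dependencies below are invented for illustration:

```python
# Minimal critical-path sketch. Activity names, durations, and precedence
# relationships are hypothetical, chosen only to illustrate the method.
activities = {  # activity: (duration, list of predecessors)
    "A": (10, []),
    "B": (5, ["A"]),
    "C": (2.5, ["A"]),
    "D": (7.5, ["B"]),
    "E": (2.5, ["C"]),
    "F": (5, ["D", "E"]),
}

earliest_finish = {}

def finish(act):
    # Earliest finish = own duration + latest earliest-finish among predecessors.
    if act not in earliest_finish:
        duration, preds = activities[act]
        earliest_finish[act] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[act]

project_duration = max(finish(a) for a in activities)
print(project_duration)  # 27.5 -- the critical path here is A -> B -> D -> F
```

Any activity whose earliest finish lies on this longest chain has zero slack; delaying it delays the whole project, which is what makes the critical path the focus of monitoring.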

PERT requires that each activity be given three kinds of estimates: the optimistic, pessimistic and most likely times.

Under the PERT procedure, the optimistic time denotes the shortest duration an activity can take to complete. Commonly, the optimistic time is placed three standard deviations from the expected time, to reduce the level of error in estimating the times.

The pessimistic time, on the other hand, is the longest duration an activity may take to finish. Here, too, three standard deviations from the expected time are applied.

The most likely time is the duration with the highest probability of completing the activity. Note, however, that this is not the same as the expected time of completion.

Using the three types of estimates, the expected time can be computed. PERT assumes a beta probability distribution for the time estimates, which yields the weighted-average formula below for the expected duration. Let Te denote the expected time.

Te = (Optimistic time + 4 × Most likely time + Pessimistic time) / 6

Using a hypothetical situation with activities A, B, C, D, E and F:

Table 1 can be used to illustrate PERT and the calculation of the expected time using the formula indicated before.

Table 1. PERT

Activity    Optimistic    Pessimistic    Most likely    Expected time
A           5             15             10             10
B           2.5           7.5            5              5
C           1.5           3.5            2.5            2.5
D           5             10             7.5            7.5
E           2             4              2.25           2.5
F           1             5              6              5

Note that it is only by coincidence that some of the most likely times equal the expected times.
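The Table 1 figures can be checked with a short script. This is a minimal sketch; the (optimistic, most likely, pessimistic) triples are transcribed from the table and the function name is illustrative.

```python
# Minimal check of the Table 1 figures using the PERT formula
# Te = (O + 4*M + P) / 6, which assumes a beta distribution
# for each activity's duration.
def expected_time(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

# (O, M, P) triples taken from Table 1.
activities = {
    "A": (5, 10, 15),
    "B": (2.5, 5, 7.5),
    "C": (1.5, 2.5, 3.5),
    "D": (5, 7.5, 10),
    "E": (2, 2.25, 4),
    "F": (1, 6, 5),   # as listed in Table 1
}

for name, (o, m, p) in activities.items():
    print(name, expected_time(o, m, p))   # matches the last column
```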

Critical Path Method

An organization’s efficiency and effectiveness can also be enhanced by using the Critical Path Method (CPM). CPM identifies the critical path in a network diagram for the purpose of planning any kind of activity process; the activities are represented in a network diagram.

To avoid risk and to ensure efficiency and effectiveness in programme management, CPM identifies activities that depend on each other; for example, a given activity may not be able to start before another is completed. Every activity in the network diagram has an estimated duration, and some activities can take place in parallel.

CPM works on the basic idea that if any activity along the critical path is delayed, the whole project is delayed. A CPM schedule follows the same steps as a PERT schedule except for the last step (Baker et al., 2003).

Using the previous example, the critical path can be determined once the network diagram is drawn (Diagram 1).

Table 2.

Activity    Duration (wks)    Preceding activity
A           10                -
B           5                 -
C           2.5               A
D           7.5               A
E           2.5               C, B
F           5                 E, D
Diagram 1. Network diagram for the activities in Table 2 (arrows labelled with the activity durations, e.g. A = 10, B = 5).
Critical Path

A-D-F is the critical path, since it has the longest total duration (10 + 7.5 + 5 weeks).

Thus, the project takes 22.5 weeks.

The arrows in the diagram show the activities, and the circles show the events of completing or commencing an activity. The times above the events show the earliest start times, while those below show the latest finish times. Earliest start times are calculated forward from the first activity, while latest finish times are calculated backward from the last activity.
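The forward pass that CPM performs can be sketched as follows, using the Table 2 durations and precedences; the helper names are illustrative.

```python
# A sketch of the CPM forward pass over the Table 2 activities
# (durations in weeks, with their preceding activities).
durations = {"A": 10, "B": 5, "C": 2.5, "D": 7.5, "E": 2.5, "F": 5}
preds = {"A": [], "B": [], "C": ["A"], "D": ["A"],
         "E": ["C", "B"], "F": ["E", "D"]}

finish = {}      # earliest finish time per activity (memoised)
best_pred = {}   # predecessor on the longest path into each activity

def earliest_finish(act):
    if act not in finish:
        start = 0.0
        for p in preds[act]:
            if earliest_finish(p) > start:
                start = earliest_finish(p)
                best_pred[act] = p
        finish[act] = start + durations[act]
    return finish[act]

project_end = max(earliest_finish(a) for a in durations)

# Walk back from the last-finishing activity to recover the critical path.
node = max(finish, key=finish.get)
path = [node]
while node in best_pred:
    node = best_pred[node]
    path.append(node)
path.reverse()
print("-".join(path), project_end)   # A-D-F 22.5
```

This reproduces the result above: the critical path is A-D-F and the project takes 22.5 weeks.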

Event Chain Methodology

Event Chain is a methodology designed to address the problem of uncertainty in various time-based business processes. The method rests on six main principles:

In a real-life setting, an activity may not be a continuous, uniform undertaking. It usually involves uncertainties, and thus the question of risk arises. Events can either shorten or lengthen a project’s schedule.

A project can fail when a given event triggers other, unplanned events. This is where the method gets its name, since these chains of triggered events are called event chains.

Once events and event chains have been defined, a quantitative analysis can be performed with the aid of Monte Carlo simulation to capture the uncertainties and quantify the events’ impact. To prevent double counting, it is essential to distinguish between the factors contributing to the duration distributions and the effects of the events themselves.

The negative effects of event chains can be mitigated by identifying the ‘critical chains of events’: those most likely to affect the project negatively.

Using historical data, the probability and effect of events are measured. Monitoring the progress of activities keeps the information used to analyse the project up to date.

Lastly, uncertainties and risks become easier to analyse in a project model. This is simplified by event chain diagrams, which show the relationships between tasks and events and how events affect one another (Intaver Institute, 2006).
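The Monte Carlo principle above can be illustrated with a minimal simulation of one task and one risk event; the probability, delay and duration used here are invented for illustration, not taken from any source.

```python
import random

# Monte Carlo sketch of the quantitative-analysis principle: a base
# task duration plus a risk event that, when triggered, lengthens
# the schedule. All numbers here are illustrative assumptions.
random.seed(1)

BASE_DURATION = 10.0   # planned duration in weeks (assumed)
EVENT_PROB = 0.3       # probability the risk event occurs (assumed)
EVENT_DELAY = 4.0      # extra weeks when it occurs (assumed)

def simulate_once():
    duration = BASE_DURATION
    if random.random() < EVENT_PROB:   # the event fires...
        duration += EVENT_DELAY        # ...and extends the schedule
    return duration

trials = [simulate_once() for _ in range(10_000)]
mean_duration = sum(trials) / len(trials)
print(round(mean_duration, 2))   # close to 10 + 0.3 * 4 = 11.2
```

Repeating the simulation many times yields a distribution of project durations rather than a single point estimate, which is exactly what the event chain analysis uses to quantify schedule risk.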

Conclusion

Thus, in answer to the question, “Which metrics are used to measure the effectiveness and efficiency of a network in pursuit of organizational goals?”, PERT, CPM and Event Chain methodologies are used. It is also worth noting that the Event Chain methodology is the most recent of the three project-analysis methods.

References:

Baker, S. et al. (2003). The Complete Idiot’s Guide to Project Management. Alpha Books.

Daft, R.L. and Lane, P.G. (2008) Management (AISE). Cengage Learning EMEA.

Intaver Institute (2006) Project Decision and Risk Analysis Journal: Overview of Event Chain Methodology. Retrieved February 20, 2009, from http://www.intaver.com/Articles/RP_Art_EventChainMethodology2.html

NetMBA.com (2007) PERT. Retrieved February 20, 2009, from http://www.netmba.com/operations/project/pert/


06 Oct 2009

Sample Essay: Outcome Evaluation of a Public or Non-Profit Organization

1. Introduction

The public sector, across all levels of governance, is under intense pressure to provide better services with the least resources possible. It is essential to optimize the usefulness of available resources in order to achieve the best possible results. Non-profits likewise need to utilize their available tools and human resources to the maximum, since they often depend on unreliable external sources of funding and hence must guard against the possibility of shortage of capital and other resources.

Policy analysts and public managers, therefore, are also under the scanner for their ability to harness the resources available to the maximum. They ought to be able to decide as to which programs should be implemented and given priority over other programs with respect to the allocation of funds, time, effort and other resources. Hence, they need to be able to prioritize what services will be offered and evaluate whether the programs designed to provide such services have been effective and efficient.

The most commonly used interdisciplinary approaches and methods for evaluating policy impacts and program outcomes are cost-benefit analysis, randomized field experiments, quasi-experimental assessment, and participatory assessment. Thus we have a host of designs to choose from for the study.

The quasi-experimental design comes in many types, including the following:

1. One-Group Posttest-Only Design:

It is extremely simple and rarely used in social science research.

2. One-Group Posttest-Only Design with Multiple Substantive Posttests:

It allows pattern matching, as well as multiple, unique, and substantive posttests.

3. Pretest-Posttest One-Group Design:

It can be implemented with either the same units or different units receiving both the pretest and posttest. It suffers, however, from single-group threats.

4. Nonequivalent-Groups Design (NEGD):

Since assignment is not random, there is a risk of selection bias. Nevertheless, the method handles internal group threats well, as it incorporates a control group to reduce internal validity problems.

2. Methodology

A non-equivalent groups design (NEGD) was followed, based on a pre-test and post-test of both an experimental and a control group, as suggested by Trochim (2002). Only the experimental group was exposed to the intervention. The control group was included in the study to control the possible history-effect as a threat to the internal validity of the study. The non-equivalent groups design is a quasi-experimental design and is widely used in social research. It differs from a pure experimental design in that the groups are not randomly assigned (Trochim 2002).

Sampling

In this study thirty-six people were included in the experimental group. They were chosen because they had already enrolled for the six-month training intervention that was to be evaluated. The participants were all between the ages of 18 and 25. Fourteen participants were female, and twenty-two were male. The experimental group members were not engaged in any formal secondary or tertiary studies at the time, although all of them had completed secondary school and some of them already had tertiary qualifications.

In selecting the control group, care had to be taken to select a group that would provide an adequate control. In order to ensure that the two groups could be validly compared in terms of previous exposure as well as motivation levels, the main criterion was that the control group, like the experimental group, should be in the same intermediary phase between school and work, and should also be actively involved in some way or another in preparing themselves for the workplace. Another criterion was that the control group should not have been exposed to the training intervention or elements thereof. For the reasons mentioned above, the control group comprised twenty second-year commerce students from the University of Pretoria, ranging in age from 20 to 22. Ten members were male and ten were female. They were all in the intermediary phase between school and work and they were all actively involved in preparing themselves for the workplace. The fact that they had already successfully completed their first year of studies, coupled with the fact that they (out of a class of more than fifty) decided to take part in the study, is a further positive indicator pertaining to motivation levels. During the six-month period that the experimental group participated in the training programme under review, the control group continued with their daily class routine. In terms of the control group not being exposed to the social constructivist training programme or elements thereof, it is significant that second-year students were chosen: the undergraduate commerce programme at the University of Pretoria is not at all structured in a social constructivist manner. Classes are very big (fifty plus); students do not form learning communities in small groups but attend their respective programmes individually; the programme is divided into separate subjects with a strong focus on conveying content and assessment is done according to traditional examination methods. 
Thus, although one can never create a true experimental situation where members of a control group are not exposed to any form of social constructivist learning whatsoever in their academic or personal lives, the research team felt confident that the control group experience was significantly removed from true social constructivism to provide an adequate control for the purposes of the study.

The Training Intervention

The training intervention was designed and presented by a private educational initiative consisting of team members from various fields such as psychology, education, theology and business. The intervention was a six-month full-time (8:00-16:00) programme and was funded by private investors, as well as by the registration fees from the participants themselves. The programme aimed to develop individuals to be successful in the workplace of the future. The primary goal of the programme was to develop certain characteristics that could form the foundation for developing the participants’ ability to function in the workplace of the future. Although the characteristics they would require were seldom explicitly focused on, the programme design and training methodology aimed to develop these characteristics as an indirect consequence of the day-to-day training interventions. Some of the themes covered by the training interventions were life skills and personal mastery, entrepreneurship and business, making sense out of history, news and politics, economics and statistical reasoning and environmental awareness.

The training methodology that was followed could be broadly divided into two main categories, namely facilitated group interventions and facilitated one-on-one interventions. While the group interventions allowed for the participants to explore and master the desired learning objectives collaboratively, the one-on-one sessions took place between a learner and a learning partner (coach) and allowed the learner to reflect on her/his progress in relation to her/his personal future goals and objectives. The learning partners and group facilitators were all experienced HRD practitioners qualified in the fields of psychology and education.

The group sessions made extensive use of activities that engaged the learners in a process of discovery and group reflection. Some of these activities were simulated, while others were real life experiences. Many activities were also designed by the learners themselves, which created considerable buy-in from learners into the learning process.

One example of a simulated set of activities that was used is a high ropes adventure experience. These activities took the learners out of their comfort zones and stimulated discussions around group dynamics, leadership, interpersonal relationships, uncertainty, change, personal goal-setting and risk. One example of a real life activity that was used was the ‘R50 business’. Each student received R50 on a Thursday morning and on the following Monday had to report on the profit she/he had made. Principles of entrepreneurship and business opportunities were derived from this activity. The subject matter was thus experienced by the learners as an integrated whole, situated within the context in which it would be used again in future.

Facilitators steered the process and created an environment in which the learners could arrive at their own conclusions. The learners also came from diverse cultural and academic backgrounds, which extensively influenced discussions and conclusions.

Assessment was an ongoing dialogical process between peers, the learning-partner (coach) and the learner, the facilitator and the learner and self-assessment. Except for the psychometric assessment that was conducted as part of this study, assessment of progress never took place by means of a test or exam and was continuously used to set new development targets and to celebrate targets that had been reached. Table 1 compares the practical implementation of the training intervention with the principles of social constructivism.

The Research Process

Based on the literature review, a number of specific characteristics were identified as necessary for success in the workplace of the future. A battery of psychometric instruments was compiled to measure these characteristics. The pre-test for the experimental group was done on the first day of the training programme and the post-test on the last day. There was thus a six-month period between the pre-test and the post-test for the experimental group. The control group also had a period of six months between the pre- and post-tests, but were not exposed to the training intervention. The control group’s tests were administered during class time, as they all continued with their day-to-day university classes over the six-month period. The testing conditions for both groups were very similar, as both groups were tested under controlled classroom conditions.

Measurement Instruments

The psychometric battery that was used as the assessment instrument consisted of sixteen tests from the Situation Specific Evaluation Expert (SpEEx) and Potential Index Batteries (PIB) series. The individual tests that were included in the psychometric battery have satisfactory, well-researched reliability (ranging between 0.58 and 0.92) and validity (ranging between 0.70 and 0.94) statistical records (Schaap 2004). The tests measured the following constructs: creativity, stress tolerance, type A/B behaviour, frustration tolerance, self-acceptance, adaptability, internal/external actualisation, conformity/non-conformity and the demonstrative, Samaritan (behaviour), persevering and evaluative social styles. Table 2 indicates how the constructs that were measured are linked to the characteristics that the training programme focused on.

3. Results

Statistical Analysis

The Mann-Whitney U-test was used to determine the significance of the differences between the variation that took place in the experimental group and in the control group over the six-month period between the pre- and post-tests. In this test, each treated entity is compared to each control entity; the test is used when any treated entity can be validly compared to any control entity. The interpretation of the results of the Mann-Whitney U-test is similar to that of the t-test, except that the U-test uses the sum of rank orders rather than the statistical mean. Where results were statistically significant at the chosen decision criterion, the practical significance (meaningfulness) and effect sizes were also calculated. The reporting of effect sizes is encouraged by the American Psychological Association in its Publication Manual (APA, 1994). For differences between averages the meaningfulness level was set at d = 0.5 (medium effect size) (Cohen, 1988). The meaningfulness (d) pertaining to the comparison of the experimental and the control groups is calculated as follows (Steyn 1999):

d = (M_post − M_pre) / SD_max

Where:

d = meaningfulness

M_post = the mean of the post-measurement of the experimental group

M_pre = the mean of the pre-measurement of the experimental group

SD_max = the maximum standard deviation between the pre- and post-measurement of the experimental group.
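As an illustration of the two statistics described above (the scores below are invented for the example, not the study’s data), the meaningfulness d and a pairwise Mann-Whitney U can be computed as follows.

```python
# Illustrative computation of the meaningfulness statistic d and the
# Mann-Whitney U statistic on made-up pre/post scores.
def cohens_d(post, pre):
    m_post = sum(post) / len(post)
    m_pre = sum(pre) / len(pre)
    def sd(xs, m):
        # sample standard deviation
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    # Per the formula above: divide by the larger of the two SDs.
    return (m_post - m_pre) / max(sd(post, m_post), sd(pre, m_pre))

def mann_whitney_u(a, b):
    # U for sample a: count of (a_i, b_j) pairs where a_i exceeds b_j,
    # with ties counted as half (equivalent to the rank-sum form).
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1
            elif x == y:
                u += 0.5
    return u

pre = [10, 12, 11, 13, 9]    # invented pre-test scores
post = [14, 15, 13, 16, 12]  # invented post-test scores
print(round(cohens_d(post, pre), 2))   # 1.9
print(mann_whitney_u(post, pre))       # 23.0
```

In practice the U statistic would then be referred to its sampling distribution to obtain a p-value, which is what the study’s decision criterion was applied to.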

For results with medium and large effect sizes, the Bonferroni adjustment to alpha levels was made (Hsu, 1996).

The direction of the change in the experimental group was positive in the cases of creativity, adaptability and self-acceptance and negative in the case of the evaluative social style. The differences on the adaptability (large effect size), self-acceptance (medium effect size) and creativity (medium effect size) scales are also practically meaningful according to Cohen’s test for meaningfulness (practical significance). If the Bonferroni adjustment to alpha levels is made to rule out coincidental differences, only the difference on the adaptability scale remains significant. However, when doing the Bonferroni adjustment, care should be taken not to discard results that could in fact be valid (Hsu, 1996).

4. Discussion

If the Bonferroni adjustment is applied the alternative hypothesis can be fully accepted only for the adaptability of the experimental group in comparison to that of the control group. Still, given the fact that the creativity and self-acceptance differences are also meaningful (practically significant) according to Cohen’s formula, and given the danger of discarding valid results through the use of Bonferroni’s adjustment, the alternative hypothesis for the dimensions of creativity and self-acceptance is (at least) preliminarily accepted. Participants in the programme were thus significantly more capable of adapting to a continuously changing environment than they were before participating in the training programme. Participants were also able to display relatively higher levels of creativity and creative approaches to problem-solving than they had been prior to the intervention. The levels of confidence and acceptance of self were also relatively increased. The attributes of creativity (Grulke, 2001; Brownstein, 2001; Cetron, 1999), adaptability (Boyatzis, 1999; Herman and Gioia, 1998) and self-acceptance (Branden, 1997) are all regarded as essential for succeeding in the workplace of the future.

The demonstrative and evaluative social styles of the participants also underwent a statistically significant (though not a practically meaningful) change. This could mean that participants were relatively more willing to express themselves assertively and to take part in group discussions than they were before the intervention. They were also relatively less evaluative in their social approach, meaning that they would take decisions faster and with less worrying about having every bit of information before taking action. Being willing and able to express oneself, take risks, get involved and create action are also regarded as essential attributes for succeeding in the workplace of the future (Ridderstrale and Nordstrom, 2004; Boyatzis, 1999; Epstein, 1998; Branden, 1997).

However, according to the results, the training programme did not have a statistically significant impact on constructs such as stress and frustration tolerance, type A/B behaviour, internal/external locus of control, conformity/non-conformity and the social styles called ‘perseverance’ and ‘Samaritan’. For these dimensions, the zero-hypothesis is accepted. The training intervention was thus partially successful in developing the characteristics that were measured.

The purpose of this research was to determine whether or not a training programme which was based on the principles of social constructivism was effective in developing individuals for the future world of work. In order to determine this, a psychometric test battery was used to measure a number of constructs that are related to what certain scholars regard as being characteristics needed for success in the future workplace.

First, we need to acknowledge that the idea of a ‘future workplace’ will remain (at best) a hypothetical one that will keep on changing as we move towards it. Second, we have to admit that many questions regarding developing people for this workplace still remain unanswered, as this study indicated only the development of adaptability, creativity and self-acceptance, while a score of other characteristics and competencies will very probably also be required in future workplaces. However, this study does add value in that it extends the insights we already have in terms of human learning to the area of developing young people (in the phase between school and work) for the workplace(s) they are about to enter. As all organizations are confronted with the challenges and uncertainties of the future, the way in which we develop young people to enter these organizations should be re-evaluated on a continuous basis. This training programme used a social constructivist approach that acknowledges the fact that people learn best when doing, discovering and sharing. In this way the divide between learning and real life decreases, and we are also able to develop ‘softer’ characteristics such as adaptability, self-acceptance and creativity.

This study had three main limitations. First, due to logistical constraints, the groups were not assigned randomly. Second, due to the cost of the intervention which limited the number of participants to the programme, a Solomon four-group design could not be followed. The Solomon four-group design would require an additional two control groups that did not take part in the pre-test evaluation (Trochim, 2002). This would control the effect that the pre-test might have had on the participants. The limited sample size also makes it difficult to generalize these results to the wider population.

A third possible limitation of the study is situated at a more philosophical level. The training programme that was evaluated was based on the educational principles of social constructivism – a markedly post-modern philosophy and approach. In addition to this, the context which forms the backdrop for this study is that of the workplace of the future – a context that certainly tends increasingly to the ‘post-’ side of modernism. The study thus evaluates a post-modern training programme, based on a post-modern educational philosophy, in order to determine whether or not it effectively develops people for a post-modern workplace. However, the methodology which is applied in executing the study is very modernist and positivist in its approach. It is a methodology where human characteristics are broken down into psychometric constructs which are (almost mechanistically) measured and statistically analysed. This becomes especially problematic when one tries to link each construct measured seamlessly with the exact nuances that the literature associates with the successful individual of the future. The question is whether the world of metaphor, story and dialogical understanding can be subjected to the measurement instruments and research methods of the world of mechanistic and systematic understanding in any valid way. However, this study wishes to communicate its message effectively to the modernist remnants in our present-day institutions responsible for preparing young people for the workplace. It therefore has to be able to express itself in the language and idiom best understood by these institutions. Still, it would be interesting to see how a more qualitative or even narrative research model would have evaluated this training programme.

Further research that could add further valuable insights might be a post-post assessment to indicate the sustainability of the changes that took place. This could most effectively be joined with a qualitative assessment in order to determine the real level of success the experimental group achieved in the future workplace over a longer period of time. From the researcher’s subjective experience of the experimental group’s development, the question of age may also be of interest. Does age have an impact on the efficiency of a social constructivist learning experience? The age-differential between the two groups was too small to draw any conclusions in this regard, although there were stages in the programme where the older participants seemed to be getting more value from the process than their younger classmates.

Works Cited

Trochim, William M.K. (2001), The Research Methods Knowledge Base, Cincinnati, OH: Atomic Dog Publishing, pp.191-254.

Trochim, W. M. K. (2002), Nonequivalent groups analysis, The Research Methods Knowledge Base. Available at: <http://www.socialresearchmethods.net/kb/statnegd.htm>

Shadish, William, Cook, Thomas and Donald Campbell (2002). “Ch. 4: Quasi-Experimental Designs that either lack a control group or lack pretest observations on outcome,” Quasi-Experimentation, Dallas, TX: Houghton Mifflin.

Hsu, J. C. (1996). Multiple Comparisons Theory and Methods, (London: Chapman & Hall).

Cohen, J. (1988) Statistical Power Analysis for the Behavioral Sciences, revised ed (Orlando, Florida: Academic Press).

Grulke, W. (2001) Radical Innovation (South Africa: Thorolds Africana Books).

Brownstein, B. (2001) Collaboration: the foundation of learning in the future, Education 122(2), p. 240.

Cetron, M. (1999) Fair or foul? Forecasts for education, School Administrator, 56, p. 6.

Boyatzis, R. E. (1999) Self-directed change and learning as a necessary meta-competency for success and effectiveness in the 21st century, in: R. Sims and J. G. Veres (Eds) Keys to Employee Success in the Coming Decades (Westport, CN: Greenwood).

Branden, N. (1997) Self-esteem in the Information Age, in: The Organization of the Future (San Francisco, CA: Jossey-Bass).

Ridderstrale, J. and Nordstrom, K. A. (2004) Karaoke Capitalism (Stockholm: Bookhouse Publishing).

04 Oct 2009

Sample Essay: Tattoos

Why do people inflict pain upon themselves, especially when the process results in a permanent mark upon the body? This is a profound question. In seeking an answer, I focused initially on tattoos, and my research began by asking individuals about their chosen marks. Although some of the answers I received were well thought out and showed remarkable insight and self-awareness, many were unsatisfactory. Some replied that they “just wanted a tattoo” or “just liked the design.” My desire to probe deeper was prompted by an adamant belief that actions speak louder than words, and that often individuals are unwilling or unable to fully articulate the complex motivations for their actions.

Tattoos are prompted by “the primitive desire for an exaggerated exterior” and are manifestations of deep psychological motivations. They are the recording of dreams, which simultaneously express an aspect of the self and recreate and mask the body. As products of inner yearnings, self-concepts, desires, and magical or spiritual beliefs, designs on the human body formed by inserting pigments under the skin have been crafted by nearly every culture around the world for thousands of years. Definitive evidence of tattooing dates to the Middle Kingdom period of Egypt, approximately 2000 B.C., but many scholars believe that Nubians brought the practice to Egypt much earlier. There was little anthropological attention to tattooing in the early part of the century because of preconceived notions of its insignificance to cultural analysis. Archaeological evidence indicates that the Maya, Toltec, and Aztec cultures performed tattooing and scarification, and that the practice is thousands of years old in Asian cultures.

Although tattooing was practiced in pre-Christian Europe, the word tattoo does not appear in English until Captain James Cook imported it after a journey to the Pacific Islands in the eighteenth century. Captain Cook claimed the Tahitians used the word tatau, from ta, meaning “to strike or knock,” for the marks they made upon their bodies. Captain Cook recorded this word as “tattaw.” The Polynesian word tapu, from which the word taboo derives, indicates the status of the person while being tattooed. Although no connection has been made between the words tattoo and taboo, it seems highly likely that they are related. While enduring the process of acquiring socially meaningful marks, the tattooed person is being formed and shaped into an acceptable member of society. Prior to the completion of the tattoo, the person is not only physically vulnerable because of the possibility of contamination during the penetrating process of tattooing but symbolically vulnerable as well. No longer without a tattoo, but without a finished tattoo, the person’s body and therefore the self are not yet completed. The person is a liminal entity not yet in society and therefore taboo.

Although the origin of tattooing is uncertain, anthropological research confirms that tattooing, as well as other body alterations and mutilations, is significant in the spiritual beliefs of many cultures. Various peoples tattoo or scarify during puberty rituals. In traditional South Pacific Tonga society, only priests could tattoo others and tattoos were symbolic of full tribal status. Eskimo women traditionally tattooed their faces and breasts and believed that acquiring sufficient tattoos guaranteed a happy afterlife. In many African cultures scars indicate social status and desirability as a marriage partner. Scarification patterns often identify the bearer as a member of a specific village. Many of these practices are changing and fading as Western influences enter African cultures.

Until the mid-nineteenth century, Cree Indians living on the Great Plains tattooed for luck, for beauty, and to protect their health. Cree men with special powers received tattoos to help them communicate with spirits. A dream conferred the privilege of receiving a tattoo, which would be inscribed during a ceremony conducted by a shaman authorized to tattoo. The tattooing instruments were kept in a special bundle passed on from shaman to shaman. The ability to withstand the painful and tedious process of tattooing, which often lasted two to three days, confirmed the bearer’s courage. Blood shed during the process was believed to possess magical power and was absorbed with a special cloth and kept for future use.

In a Liberian initiation ceremony “the novices … are resuscitated to a new life, tattooed, and given a new name … they seem to have totally forgotten their past existence.” (Armstrong, 2005) The ritual recreates the flesh bequeathed to initiates by their parents and experienced during childhood. The physical change marks a symbolic rebirth into a new spiritual, social, and physical reality as well as a real physical change. This magical use of the body reiterates the idea that physical and spiritual existence and their interactions are deeply entwined.

The American association of tattooing with exoticism solidified in 1851 when Dan Rice hired a tattooed man named James F. O’Connell to appear in his circus. During this time Rice was also fascinating America with another body image in popular culture, the blacked-up minstrel. The minstrel representation of the black body was replete with complex meanings of manhood, race, and class. The tattooed body on display was probably less familiar but equally intriguing. Without evidence of what kind of tattoos Rice’s employee had, or whether he performed or served only as a display object, it is difficult to assess the meaning of his existence. Perhaps O’Connell conjured images of a white savage, halfway between the articulate, civilized white man and the Native American who expressed his culture with paint and body markings. Perhaps audiences saw the tattooed man as Melville’s Queequeg incarnate: exotic, half-blackened with ink and half-black, but not without feeling or humanness. P.T. Barnum followed Rice’s success by displaying an elaborately inscribed Albanian named Constantine, who was an extremely popular attraction. Barnum was the first to exhibit a tattooed woman, in 1898, which added the erotic element of viewing the female body.

During the latter part of the nineteenth century, as the public became more familiar with the art of tattooing through the circus, which was primarily a working- and lower-class entertainment, tattooing was also developing commercially. The first known professional tattooist in the United States was Martin Hildebrand, who had an itinerant practice during the Civil War and opened a shop in New York City in the 1890s.

At the turn of the century, tattoos showed up in titillating and disreputable places, and tattooing became a shop-front industry in the disreputable Chatham Square area of New York City. Electric tattoo machines made tattooing cheaper and less painful and good tattoos easier to render. With this new technology, tattooing became popular among the lower classes and quickly came to be associated with blue-collar workers and ruffians. Although tattooing was an upper-class trend for a brief period, by the 1920s the middle class considered it deviant. Tattoos were regarded as a decorative cultural product dispensed by largely unskilled and unhygienic practitioners from dingy shops in urban slums, and consumers were seen as drawn from marginal, rootless, and dangerously unconventional social groups.

In the 1930s, the American fascination with body alteration as a deviant practice continued. During this time a psychiatrist and writer named Albert Parry often wrote about the significance of tattoos and embedded stereotypes of deviance in the public discourse. Although Parry was an avid fan of tattooing and bemoaned its decline in popularity, he called tattooing a “tragic miscarriage of narcissism.” He claimed tattooing was a substitute for sexual pleasure, evidence of homosexuality, and a source of masochistic pleasure. (Huxley & Grogan, 2005)

Parry associated tattooing with deviant sexuality. Although the exhibition of a tattooed woman in the circus in prior decades was tinged with a hint of sexual voyeurism, Parry explicitly constructed images of tattooed women as abnormal and accessible commodities. He claimed that five percent of American women were tattooed and insinuated that beneath their conventional clothes these disguised women had marked their bodies with signs of desire and erotic adventure. Parry stated that “Prostitutes in America, as elsewhere, get tattooed because of certain strong masochistic-exhibitionist drives.” In his account, prostitutes obtained tattoos because they desired yet another reason to pity themselves and were seeking to be mistreated by clients; they believed tattoos would prevent disease, and they obtained sexual pleasure from the tattoo process. (Stirn, 2003)

Tacitly based on the preconception that marking the body is deviant, psychologists have sought to establish a connection between tattoos and psychopathology. Members and potential members of the military who bear tattoos have served as subjects for several studies that correlate tattoos and social adjustment. A study in 1993 concluded that “psychopathology or social or emotional maladjustment is significantly higher among tattooed than among non-tattooed men.” (Horne, 2007) A 1998 study concluded that sailors with tattoos were more likely to be maladjusted, and that military men with “Death before Dishonor” tattoos were more likely than non-tattooed sailors to be discharged from the service. Other studies conducted during the late 1980s link tattooed women with homosexuality and masochism, and tattooing practices in institutions with high levels of aggression, sexual insecurity, and social maladjustment. These studies both preselected the subject pools and ignored the effects of the institutional milieu on the tattoos. (Horne, 2007)

Similar to inmate self-mutilation, tattooing may provide relief from the numbness of incarceration and establish individual or gang identity. A 1994 survey of public perceptions of tattooed persons revealed that a majority of people perceived tattooed individuals as physically strong and psychologically aggressive. This survey concluded that whether or not tattoos are indicators of social maladjustment, they may function to enhance the bearer’s self-image and integrity. Returning to the theory of confirmation of the self in a pain-enduring sadomasochistic interaction, one can understand the connotation of toughness and integrity that a tattoo confers. One psychoanalytic case study observed that a dominatrix in a sadomasochistic relationship bore her tattoos as evidence of her ability to manage the ritual infliction of pain adroitly. This self-mastery and “toughness” earned her the right to control her submissive partners and proved her ability to alter her own and her partners’ consciousness and identity. (Horne, 2007)

Conclusion

The lack of understanding of the functional purposes of both the tattooing process and the final marks has led to a perception of tattooing as barbaric, deviant, and sexually perverse. Dominant American culture has considered tattoos marks of degradation, criminality, and marginality. Without an understanding of the manipulation of the body to inspire “sacred awe” in viewers and bearers of tattoos and other body alterations, one cannot grasp the significance of these alterations as a tangible establishment of personal, spiritual, and social identity.

Although body modifications such as tattooing and piercing have been construed as signs of deviance, during the past two decades body alteration has begun to filter into mainstream culture as a popular form of self-expression. Articles about tattooing and piercing proliferate in popular literature. Fashion magazines show models with tattooed ankles and pierced navels, and recruit well-known tattooed musicians for their pages. Children are able to play with tattooed dolls. Exhibits of tattoo art are shown in art galleries. Piercing boutiques and tattoo shops are conducting brisk business.

Reference

Armstrong, M. L. (2005). Tattooing, body piercing, and permanent cosmetics: A historical and current view of state regulations, with continuing concerns. Journal of Environmental Health, 67.

Huxley, C., & Grogan, S. (2005). Tattooing, piercing, healthy behaviors and health value. Journal of Health Psychology, 10.

Stirn, A. (2003). Body piercing: Medical consequences and psychological motivations. The Lancet, 361.

Horne, J. (2007). Tattoos and piercings: Attitudes, behaviors, and interpretations of college students. College Student Journal, 41(4).

26 Jun 2009

Sample Essay: Healthcare Ethics

Introduction

Ethics speaks primarily to right and wrong in human relationships. It is from the study of ethics and, consequently, a better understanding of moral principles that society may hope to enhance the sense of tolerance, fairness, compassion, and sensitivity to another’s pain, and thereby improve aspects of human behavior that place humans in a separate niche in biological history. Ethical questions, especially as they apply to medicine, have become common topics of discussion during the past twenty years. Bitter disputes have arisen regarding abortion, suicide, and human experimentation, as well as the management of the dying patient and the severely disabled newborn. Today, American health care has grown into a multi-billion-dollar industry controlled by large corporations, and it is clear that the ethical issues associated with delivering health care services to patients are often overridden by monetary interests.

Economic / Financial

In the past 50 years, few sectors of the U.S. economy have escaped the periodic ravages of recession. The health-care industry is a notable exception. Since 1950, hospitals and other health-related enterprises have experienced uninterrupted, indeed meteoric, expansion. Between 1950 and 1982 national health expenditures increased more than 25-fold, reaching $322 billion per year, and the proportion of GNP accounted for by the health sector increased from 4.4 to 10.5 percent. During the 1970s health-care employment increased from 4.2 to 7.5 million workers, accounting for one seventh of all new jobs in the United States. Moreover, these trends continued through the recessions of the early 1980s and 1990s. The fast pace of hospital expansion is indicated by the fact that in 1980 the average age of hospital capital assets stood at an all-time low of 7 years, as compared to 15 years for the service sector as a whole and 23 years for capital in manufacturing industries.

Strikingly, the conquest of the main killers of the young (infectious diseases) was largely complete in the United States by the time the health-care sector began its explosive growth, and was clearly due to improvements in the standard of living and public health measures rather than curative medicine. The spectacular expansion of health facilities which occurred after the era of the main advances in life expectancy has been accompanied by massive government spending on curative medical care, a singular neglect of public health and preventive measures (which currently account for less than 3 percent of health expenditures), and very modest improvements in health. Moreover, many Americans lack access to the most basic medical services. The United States and South Africa share the dubious distinction of being the only developed countries without universal health insurance. Despite the widely heralded Medicaid and Medicare programs, 25 million Americans lack health insurance of any kind, 40 percent of infants and toddlers are not fully vaccinated, and the elderly now spend as large a proportion of their incomes for health care as they did before the passage of Medicare. The paradox of vast increases in health care resources which are funded largely by the government yet fail to provide the services most critical to the improvement of health puzzles bourgeois health-policy analysts. An understanding of the role of health care in the accumulation of capital can help to unravel this mystery, forecast future trends, and focus the work of the left in this field.

Economics of health care

In the best of all possible worlds both economic efficiency and commitment to the individual patient would govern the delivery of medical care. In the real world the conflict between these two factors is increasingly disrupting the health professions. On the one hand, the traditional primary allegiance of health-care providers is to their individual patients: they cannot deny patients something they think would genuinely help. The doctor’s Hippocratic oath illustrates this. On the other hand, we all endorse the sensible goal of economic efficiency, getting the greatest value from our resources. We want to get the most health for our health-care dollars and to obtain as much value from every extra bit we invest in medicine as we could get by using the required resources on something else entirely. In economists’ jargon, we want to minimize our opportunity costs.

This conflict is sharpened considerably by the financial context of modern medicine. Once a patient is insured, the patient’s interest typically lies in receiving the best possible medical care regardless of whether the resources thus used might produce greater benefit elsewhere. The moral and professional allegiance of clinicians then seems to collide head-on with wider economic efficiency. Efficiency will sooner or later call for restricting care that would benefit individual patients. This is “hard efficiency”: it is surely not just the elimination of waste, and it leaves the health economist at seemingly irreconcilable odds with clinicians and their oath.

When each side in such a stark conflict has so much intuitive appeal, it is hardly surprising that the ensuing debate is heated. The traditional conception of loyalty to the individual patient even if that leads to sacrificing some of the overall value of resources gets ardently defended even by those who call for identifying and curtailing unnecessary procedures.

The push for hard efficiency won’t abate easily. Even if unnecessary procedures are better identified and curtailed, eradicating them is only a one-time saving. All historical trends point to continued growth in doubts about whether any and all care that serves the medical need of the patient is worth the money it costs. Randomized clinical trials will be telling more, not less, about the touch-and-go margins of effective care — what might possibly do a patient some good but is statistically and economically a dubious bargain. Both the range of options provided by medical technology and the age of the population will continue to increase faster than per capita income. Health-care pressures on government and business budgets will grow, not diminish.

The lines of the continuing prospective debate may thus seem drawn already. Shouldn’t we come clean in our ethics and either honestly sacrifice commitment to the individual patient or frankly relent in the push for efficiency? Yet such a choice is bleak; we will swallow neither option without a moral gasp. Maybe we can avoid having to abandon either side, or at least reduce the force of their collision. Clinicians should be able to keep their oath of commitment to patients while at the same time taking the larger goal of economic efficiency to heart.

Politics

This century has witnessed the transformation of American medicine from cottage industry to large-scale capitalist enterprise. The health-care industry has received massive government subsidies, and has been largely exempt from the competition characteristic of many other sectors. Until now, any medical product has been guaranteed a virtually unlimited market. However, this expansion is giving rise to a contradiction between the health-care industry and other industries. In the past the capitalist class encouraged the growth of the health sector, but as the health-care industry expands, employee health benefits increase in cost and have by now become a major cost of production. In 1980 U.S. corporations were spending more than $65 billion a year for employee health benefits, and Chrysler executives were complaining that Blue Cross had become that company’s largest supplier. Corporate leaders have also begun to express concern about the increasing proportion of state and federal budgets devoted to Medicaid and Medicare.

Soaring health-care costs thus are becoming a major concern for the capitalist class as a whole. It is moving to curtail the special privileges which have allowed the health-care sector to reap greater profits and expand more rapidly than the rest of capitalist industry. The entry of the Business Roundtable into the health-care scene coincided with the start of government initiatives to end the privileged position enjoyed by the health-care industry. Federal grants for hospital capital projects were phased out starting in 1975, though the hospital industry was easily able to compensate for this loss by increasing its use of tax-exempt bonds. Other early efforts to contain health-care costs under the Nixon and Carter regimes were equally unsuccessful. However, more recent government attempts to rein in the health-care sector have more bite. As a reporter for the Boston Globe remarked, “The Business Roundtable’s decision to get serious about hospital cost control marked the turning point in the debate. Until then, the issue had been the province of insiders, the most influential of whom had more of a stake in the status quo [the continuing expansion of the health-care industry] than in cost containment.”

Under pressure from the business community, New York, New Jersey, and Massachusetts, among other states, have passed legislation sharply limiting hospital revenues. The Reagan administration has proposed the elimination of tax exemptions for health insurance and hospital bonds. Medicaid payments have been drastically cut, making it difficult for hospitals and doctors to profit from Medicaid patients. Perhaps most important, the basic structures of the payment systems for Medicare and Medicaid are being radically altered. Hospitals will no longer be paid for each individual test, procedure, or day of care. Instead, under Medicare the hospital will receive a fixed lump sum determined by the patient’s diagnosis. For instance, Medicare might pay a hospital $2,800 to care for a patient with pneumonia, no matter how long he or she stayed in the hospital or what tests or drugs were used. For the first time hospitals will profit by providing fewer services to each patient.
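The incentive reversal described above can be sketched with a little arithmetic. In the hypothetical sketch below, only the $2,800 pneumonia payment comes from the text; the function names, daily costs, and lengths of stay are illustrative assumptions.

```python
# Hypothetical sketch of hospital incentives under two payment systems.
# Only the $2,800 pneumonia figure comes from the essay; all other numbers
# and names are illustrative.

def fee_for_service_revenue(days: int, billed_per_day: int) -> int:
    """Under fee-for-service, revenue grows with every extra day or test."""
    return days * billed_per_day

def drg_profit(fixed_payment: int, days: int, cost_per_day: int) -> int:
    """Under a DRG lump sum, the payment is fixed by diagnosis,
    so each extra day of care reduces profit."""
    return fixed_payment - days * cost_per_day

PNEUMONIA_DRG = 2800  # fixed Medicare payment from the essay's example

# A shorter stay is more profitable than a longer one at the same daily cost:
print(drg_profit(PNEUMONIA_DRG, days=4, cost_per_day=500))  # 800
print(drg_profit(PNEUMONIA_DRG, days=8, cost_per_day=500))  # -1200
```

The sign flip in the second case is the whole point: once the payment is fixed, every additional service is a cost rather than a source of revenue, which is why hospitals profit by providing fewer services to each patient.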

The incentive to do less is of course not new in health care. It is a central feature of so-called Health Maintenance Organizations (HMOs), which are increasingly popular among large corporate employers. The first, and still largest, HMO was started by Kaiser Industries to lower the costs of health care for its workers in massive New Deal construction projects like the Grand Coulee Dam. HMOs collect a lump sum in advance which covers all services delivered. Physicians are salaried and often share in any profits realized because of savings on patient care. Contrary to much recent propaganda, HMOs usually neglect preventive care, since cost savings due to prevention are most often realized after the patient has retired (either because of age or disability) and hence has lost his or her membership in the HMO. HMOs economize by emphasizing less personal, lower-cost “industrial” style care and by erecting barriers to access which discourage members from seeking care.

Thus both government initiatives and corporate policies are forcing changes in the organization of health services. Capitalist rationalization and efficiency are replacing the ideology of “care and cure no matter the cost.” And changes on the horizon promise to complete the transformation of medical care into commodity production. Health care will be a product offered for sale on the market. Hospitals and HMOs will compete in offering corporations the lowest-priced health care that will maintain the productivity of their workers, and in offering government the barest essentials of health care that will keep the unemployed, disabled, and retired from revolt. The state seems ready to forsake its role in assuring that health-care resources are provided where needed. As a recent study by the Brookings Institution put it, “Congress has abandoned the principle that medical care should be provided whenever it is needed, that cost should not be considered when life or health is at stake.”

Competition

Medicine will increasingly be asked, “Does your practice improve the productivity and tranquility of the work force?” No longer will doctors and hospitals be allowed to collect for every useless operation and superfluous machine. No longer will health care for the “non-productive” poor, disabled, and elderly be lavishly financed by the state for the benefit of the private health-care sector. To be sure, the health-care industry will be allowed a handsome profit, but one in line with the rest of capitalist industry.

Already care for patients with a particular diagnosis is referred to as a “product line,” and administrators vie to manipulate these “product lines” to maximize profits and assure the survival of their hospital in the increasingly competitive hospital market. The extent to which the ideology of commodity production has come to dominate health-care administration is evident in the titles of some recent articles in the hospital administration trade journal Modern Healthcare: “Surgical Lasers Can Generate Profit If Volume of Use Can Be Guaranteed,” “Baxter Shows Hospitals How to Use Cost Data to Prepare for Price Competition,” “Managing Along Product Lines Is Key to Hospital Profits Under DRG [Medicare] System,” “Fixed Payment Rates Force Hospitals to Reassess ICUs,” and “Medical Records’ New Financial Role Dramatically Shifts Hospital Priorities.” This last article explains that under new insurance regulations, “The medical records department supplies the base information to interpret the medical stay into a financial picture,” and anticipates that “helping their hospitals collect billions of dollars in federal reimbursement will become the top priority of medical records departments. In the past, the department concentrated on maintaining accurate records for ongoing patient care.”

Social

The commodification of health care will have important repercussions for health-care workers and patients. Competition among health-care institutions will increase and result in the elimination of smaller-scale, more personal and humane sources of care. Health care will be monopolized by large corporations employing thousands of workers organized to deliver care in an increasingly mechanized, factory-like environment, with little human contact or understanding. Thus the familiar petty-bourgeois local doctor is already being replaced by “MediStop” centers staffed by anonymous employees providing technical interventions which keep working people at work and treat others as cheaply as possible. Care of the “non-productive” poor and elderly, and interventions aimed at improving quality of life or psychological well-being, will receive short shrift. Public-health efforts to prevent the main modern-day health problems, such as heart disease and cancer, are likely to be crippled because such chronic diseases primarily affect workers in their non-productive post-retirement years.

Health-care workers will find themselves cogs within huge and growing enterprises. Administrators armed with elaborate computer systems will monitor and control day-to-day medical practice, dictating what tests and treatments are allowed for a given “product line” (disease). Physicians, in the past independent entrepreneurs, will serve as highly paid supervisors. For the first time, in 1982, more doctors in the United States were salaried than self-employed. For non-physician personnel the changes may be more painful if less dramatic. Hospitals will have more incentives to limit labor costs by holding down both wages and the number of workers. The proportion of health spending devoted to labor costs, which fell by more than 10 percent during the 1970s, will fall even more rapidly as machines replace many workers. Unions will increasingly be under attack.

Technology

The complexity of ethical dilemmas and resultant clamor will become more bewildering as technology continues to advance. We are facing a time when an egg and a sperm with desirable genetic attributes will be brought together in a Petri dish, nourished until transplanted into the uterus of a future mother, all according to parameters spelled out in a computer containing lists of potential donors. The ethical issues surrounding a surrogate mother are simple compared to those that all of us will face in the future. It has become a moot point whether ethically we should or should not move along certain potentially dangerous lines of research. The fact is that we will continue all areas of research and thus will be in need of greater understanding of ethics involved in the application of our knowledge. In order to clarify the fundamental premises upon which ethical decisions must rest, we need to stand back and assess our situation, free from the burdens of tradition, dogma, or gut reaction that limit our thinking.

As K. D. Clouser so pointedly expresses it, “Medical ethics is no big deal … it is simply ethics applied to a particular area of our lives … it is the ‘old ethics’ trying to find its way around in new, very puzzling circumstances.”[1]

Conclusion

The health care industry is one of the biggest and most profitable business sectors in America. Americans spend more and more money on health care year after year, yet the quality of services does not increase so dramatically, and the ethics of medicine tend to be ignored in such a thriving sector of the US economy. The enormous sums now spent on health care are sufficient to assure health workers a decent standard of living, provide high-quality curative medical services to all in the United States, enormously increase efforts in prevention and relevant research, and meet our now neglected international responsibilities in health. The issue is not lack of money but capitalist irrationality and waste: the insurance industry and armies of administrators which together devour 25 percent of all health-care spending; the billions of dollars of profits and advertising by drug and equipment suppliers; the massive duplication and maldistribution of facilities; and the greed and ideological bias of physicians, which lead to millions of unneeded operations and tests, the proliferation of capital-intensive treatments, and the neglect of non-technical (and hence unprofitable) therapies. Still, I sincerely believe that the pressure coming from various social groups is having, and will continue to have, a great impact on the development of health care ethics, thereby improving the quality of medical services.

Bibliography

Clouser, K. Danner. “Medical Ethics: Some Uses, Abuses, and Limitations.” The New England Journal of Medicine 293, no. 8 (August 21, 1975): 384.

Iglehart, John K. “Expenditures.” The New England Journal of Medicine 340 (January 7, 1999): 70-76.

Brubaker, Bill. “Health Premiums to Jump Again Next Year: Insurance Rate Hikes in Area, Nation Likely to Be in Double Digits, Data Suggest.” Washington Post, June 24, 2003, p. E04. Based on data from multiple sources, it appears that this will be the fourth year of premium increases in a row.

Dorschner, John. “Why Are Local Healthcare Costs among the Highest in the Nation?” The Miami Herald, July 21, 2003. An examination of the possible factors involved in the high cost of health care in South Florida.

Fuhrmans, Vanessa. “Health-Care Costs to Rise in 2004: Employers Are Expecting Increase of 12%, Fifth Year in a Row of Double-Digit Gains.” The Wall Street Journal, September 29, 2003. Many employers will shift some of the cost to employees.

Francis, David R. “Healthcare Costs Are Up. Here Are the Culprits.” The Christian Science Monitor, December 15, 2003. Overview of the parts of the health care system that have become expensive.

May, Jessica H., and Peter J. Cunningham. “Tough Trade-offs: Medical Bills, Family Finances and Access to Care.” Center for Studying Health System Change, Issue Brief No. 85, June 2004. This study examines the difficulty both insured and uninsured Americans have in paying their medical bills, and how that difficulty affects access to care.


[1] K. Danner Clouser, “Medical Ethics: Some Uses, Abuses, and Limitations,” The New England Journal of Medicine 293, no. 8 (August 21, 1975): 384.
