Tuesday, August 25, 2020

Identify the different Types of Charters available to procure a Vessel

Identify the different types of charters available to procure a vessel, and briefly explain the salient features of the types of charters identified above - Assignment Example

In this grouping of contract, a bareboat charter, the vessel owner leases the bare ship, and the charterer has the responsibility of operating it as though it were his own vessel. As the name implies, the ship is chartered bare: the vessel owner gives up control of the vessel in favor of the charterer for the period of the charter through the charter party.2 The charterer pays the entire operating expenses - fuel, supplies, provisions, port dues, pilotage, and so on - and also employs and pays the crew. However, there may be a clause in the charter party requiring that the master and the chief engineer be approved by the vessel owner. Charter party is a common marine expression and is synonymous with agreement or contract.3 Agreement and charter are different forms of the contractual arrangement options a provider can offer, differing mainly in who supplies the vessel and who crews and operates it: the customer may provide the vessel and perform the program tasks itself; the customer may provide the vessel while the contractor supplies the crew, programs the tasks, and operates; the contractor may provide the vessel while the customer plans its missions and operates it; or the contractor may perform the program turnkey.4 The charterer is responsible for the maintenance, preservation, and safety of the vessel. Prior to delivery to the charterer, the ship is surveyed by representatives of both parties, and the same is done on redelivery. The charter party will stipulate that the ship must be redelivered in the same good order and condition as delivered, with the exception of normal wear and tear. A contractor operating a vessel provided by the customer differs by the specific distinction that the customer, almost always the federal government, has supplied the vessel. There are various reasons for this arrangement. One is a result of pressure from the office of management and

Saturday, August 22, 2020

How an HR Practitioner ensures the services they provide are timely and effective

Prioritising Conflicting Needs

The needs of customers may sometimes conflict (for example, managers want production results and longer working hours while employees want more time off and a focus on work/life balance). HR would establish which requests were the most urgent and important, considering the ease and speed of dealing with each issue while maintaining focus on the overall needs of the organisation. It is important to keep all customers informed about what HR can provide in the way of services and to set realistic expectations. HR should be flexible, easy to contact and able to respond promptly and effectively. However, on occasions where the customer's need cannot be dealt with promptly, a full explanation must be given along with estimated timescales for resolution.

Effective Service Delivery

Delivering Service on Time

By prioritising needs, HR can ensure that issues are handled according to urgency, for example by considering the impact of each request on the business and taking account of:

- maintaining the wellbeing of employees
- the organisation's mission and values
- meeting performance needs
- current legislation
- meeting the demands of internal stakeholders (employees, board members and management) and external stakeholders (trade unions, shareholders, partners and job applicants).

HR good practice would include setting up a case file that could be reviewed to check progress, looking at areas of responsibility and delegation of tasks to ensure needs are dealt with consistently. Plans would be reviewed and updated at regular intervals to check progress and take account of any changes in the situation.

Delivering Service on Budget

At all times HR must consider the financial implications of delivering services by liaising with finance/accounts departments and ensuring services are delivered within budget limits. It is also important to have a clear view of all resources available to the organisation to guard against unnecessary spending; for example, using in-house services may be more cost-effective than buying services in from outside the organisation.

Dealing with Difficult Customers

Dealing with difficult customers can have a range of implications for staff and the organisation. HR needs to consider:

- where difficult customer behaviour may arise and where it would be considered a risk
- appropriate support for staff and managers handling difficult customers in line with company procedure, for example case conferences or guidance
- the needs of external customers, including unions and contractors.

The most frequently reported difficult customer behaviours are:

- verbal abuse: swearing, arguing, offensive remarks
- hostile behaviour: threatening body language and gestures
- physical abuse that may result in injury.

Approaches for dealing with difficult customers may include:

- Keep concise records and ensure these are discussed openly with the customer. This ensures they know that their behaviour will be on record and they cannot deny their actions later on.
- Adjust to their personality; communicate in a way that suits their character to make them feel more comfortable and avoid confrontation.
- Always follow correct organisational procedure. Customers will be less likely to dispute actions taken in line with guidance/law.
- Ask questions, listen carefully, show an interest in the person, use non-threatening body language and maintain eye contact.
- Keep a level head and do not react to their negative emotions or abuse.
- Never make promises!

Handling and Resolving Complaints

HR will deal with complaints on a formal or informal basis. Every situation must be dealt with promptly as it arises and handled in a fair and consistent way. Regular check-ins or an open-door policy can encourage employees to talk about issues before they escalate. HR should clearly communicate the procedure for raising a complaint (e.g. informal complaints, written complaints, how complaints may be escalated, and estimated timescales). Explain that the organisation values its customers and wishes to resolve any issues that may arise. Ensure customers feel assured that their issues will be taken seriously and dealt with confidentially, and encourage customers to feed back any issues before they escalate.

Methods of Communication

Effective communication between all stakeholders is vital to ensure all interested parties are informed and involved in the decision-making process. The method of communication used depends on the customer's needs, the type and amount of information they require, and how the customer is likely to react to the information. (Bad news is best delivered in person rather than in writing, to allow questions and discussion to take place.)

Three Different Communication Methods

Email
Advantages: fast and convenient; can be sent at any time of day or night; cheap; can be sent to individuals or groups; can attach files and share information; can be encrypted to send confidential information; confirmation of delivery/reading can be set up; the conversation/information is recorded in writing; the recipient has time to respond.
Disadvantages: relies on the recipient having access to an email account; not suitable for group discussions; less personal and may lead to misunderstanding; there may be a long wait before receiving a reply; computer viruses.

Telephone
Advantages: easily accessible to most people everywhere (mobile); the conversation can be private, or a conference call; ideal if a fast response is required; messages can be left on an answerphone.
Disadvantages: the person may be engaged or have no signal, so unable to take the call; mobile/overseas calls can be expensive; it is hard to keep a record of the conversation; spoken information only, so pictures, documents and so on cannot be shared; body language cannot be read.

Face to face
Advantages: instant feedback; can read body language and facial expressions; can share documents/pictures and discuss them; builds stronger relationships; good for delicate situations.
Disadvantages: logistics may prove difficult or expensive in getting participants together in one place; no record unless a note-taker is present, so the conversation is not accountable; discussions may become heated.

References
http://www.teach-ict.com/gcse_new/correspondence/comm_methods/miniweb/pg3.htm
http://businesscasestudies.co.uk/hmrc/getting-the-message-over the-significance of-good-interchanges
http://davidlivermore.hubpages.com/center/Difficult-Employees

Tuesday, August 11, 2020

Win Sam Stuff!

Waaaaaaaah! You've moved on to 13 already!? I don't feel loved anymore! Waaaaaah! Yikes guys, chill! You're like, 2 months away from MIT, it's time to accept the fact that another class is about to start applying. I know it's fun to complain on the internet, I do it all the time, but it's time to move on. :) Speaking of classes applying to MIT for next year, I got back from work yesterday and spotted a tour group making the rounds so I hopped in and joined them. It's always nice to hang out with prospies and tell them how cool MIT is. I'd been on the tour for, oh, idk, maybe 2 minutes, when the tour guide stopped, had everybody turn around and said "Hey everybody, this is Snively. Everybody say hi Snively!" There was a very confused "Hi . . . Snively?" from the tour group as I stood there awkwardly, not quite sure what to do with myself. The tour proceeded and I actually quite enjoyed myself despite the fact that halfway through I broke my permanent retainer on an ice cream sandwich. Still trying to figure out what to do about that, but for now I have broken metal chillin' in my mouth. I got back to my room and decided to spend some time exploring my external hard drive (a device which made losing my internal hard drive a couple of months back much less painful) and I found some fun videos that, unfortunately, have very little to do with MIT. But, through a great series of connections that I've managed to make, I think I've found a way to post all of the videos in such a way that not only do they tie back to MIT but they are all interconnected into themselves and form a feng shui of YouTube. First: I was on YouTube the other day and on the featured videos was our friend Matt Harding with an all new dancing video! Second: During CPW there was a certain hack that involved a certain '80s music video. Dan Sauza '11 and I both realized that it would just be unfair to walk by this hack without a small tribute dance, which we filmed. As you can see, we're just as bad at dancing as Matt Harding. Third: Take note of Matt Harding's location at 3:14 *cough*pi*cough*. . . Sam Maurer came to visit last year and he and Mason performed a lovely rendition of the Sad Sad Toaster song from http://www.songstowearpantsto.com (ignore Laura yelling that it's boring). I would like to take this opportunity to shamelessly plug this toaster video. It's been entered into a contest and if it has the most YouTube views by July 15th then Sam and Mason win all sorts of stuff and get to help record a new song for SongsToWearPantsTo. Go ahead and blog it, share it with friends, and enjoy the toastiness. On a sidenote, Interphase is getting ready to gear up. I can tell because there was an unusual number of kids with parents milling around today. If you're in Interphase and you're ever bored during the weekend, shoot me an e-mail and I'll show you around. snively [at] mit [dot] edu So long for now, I'll try to come up with something else to blog shortly.

Saturday, May 23, 2020

Merger of JP Morgan Chase & Co

Executive Summary

This paper on the banking industry examines the merger of JPMorgan Chase & Co. It argues that the experience of the banking industry in the US is unique and not paradigmatic, discusses the impact of the merger on JPMorgan Chase & Co., and notes that not all banks are run efficiently. The paper analyzes the merger of JPMorgan Chase & Co. using Porter's strategic model as operationalized in a fishbone analysis framework.

Table of Contents
Executive Summary
1. INTRODUCTION
1.1 Overview of Banking Industry in US
1.2 Overview of JP Morgan and Chase
2. STUDY OF MERGER BETWEEN JP MORGAN CHASE (2000)
2.1 Purpose of the study
2.2 Significance of this study
2.3 Limitations
3. RESEARCH MODEL
3.1 The Fish Bone Model
3.2 Elements of the Model
3.3 Previous Research Findings
3.4 Critics for the Previous Research
4. PREVIOUS RESEARCH METHODOLOGY
5. CONCLUSION

1. INTRODUCTION

1.1 Overview of Banking Industry in US

This paper on the banking industry examines bank mergers with a special emphasis on US banks. It argues that the experience of the banking industry in the US is unique and not paradigmatic, and that not all banks are run efficiently. Bank mergers arise because of macro structural circumstances and shifts to strategic motives over a period of time (Benston, Hunter, & Wall, 1995). Over the past few years, bank mergers and acquisitions have been occurring at a very high rate. During recent decades the US banking system has been experiencing an intense structural change which is happening at a very rapid pace. When banks document deposits made by customers, create credit evaluations, and move funds, they process information. Banks and new entrants to the financial services industry have been greatly affected by the current information processing revolution. Banks are gradually transforming themselves from intermediaries that hold loans, deposits, and securities on their balance sheets into brokers who originate loans and then allocate them to others who obtain securitized assets. This change has occurred due to the rapid increase of technical advancements in processing information.

1.2 Overview of JP Morgan and Chase

JPMorgan Chase & Co. is one of the world's largest, oldest, and best-known financial institutions. Since their founding in New York in the year 1799, they have succeeded and grown by listening to their customers and meeting their needs. Being a global financial services firm with operations in more than 50 countries, JPMorgan Chase & Co. combines two of the world's best and premier financial brands: J.P. Morgan and Chase. JPMorgan Chase & Co. is a leader in financial services for consumers; investment banking; financial transaction processing; small business and commercial banking; and private equity and asset management. JPMorgan Chase & Co. serves millions of consumers in the United States and many of the world's most prominent corporate, institutional, and government clients. JPMorgan Chase & Co. is built on the foundation of more than 1,000 predecessor institutions that have come together over the years to form today's company.
Their many well-known heritage banks include J.P. Morgan & Co., The Chase Manhattan Bank, The First National Bank of Chicago, Manufacturers Hanover Trust Co., Bank One, Chemical Bank, and National Bank of Detroit, each closely tied in its time to innovations in finance and to the growth of the United States and global economies. (The History of JP Morgan Chase & Co., 2008)

2. STUDY OF MERGER BETWEEN JP MORGAN CHASE (2000)

On examination, four main paths are identified which explain the reasons behind the merger activity. These paths are related to (1) creating economies of scale, (2) expanding geographically, (3) increasing the combined capital base (size) and product offerings, and (4) gaining market power. In examining these paths, it appears that, at a much higher level in Porter's fishbone framework, the mergers are driven more by cost reductions than by increases in gross revenue. Global consolidation and downsizing allow banks to increase their size and market capabilities while creating some of the technological efficiencies largely responsible for the cost savings of mergers. Research on the financial performance of merged banks has produced conflicting conclusions; some research has found that bank acquisitions do not improve the financial performance of the combined banks (Baradwaj, Dubofsky, & Fraser, 1992). When Chase Manhattan announced its merger with J.P. Morgan in September 2000, the company's shares were selling at $52 (Palia, 1994). Today, they trade around $30, and the press is filled with reports of the company's performance. Getting bigger has not helped Chase Manhattan to get better. Nor has it helped other companies. The Wall Street Journal recently reported that the share prices of the 50 biggest corporate acquirers of the 1990s have fallen three times as much as the Dow Jones Industrial Average (Toyne & Tripp, 1998). Size counts, especially in addressing complex problems that span geographies and functions, but bigger doesn't make a company better at serving customers. Chase is the product of two megadeals that came earlier, its mergers with Chemical and Manufacturers Hanover. J.P. Morgan is part of the venerable House of Morgan, which was traditionally a commercial bank but has aggressively entered the investment banking business. After flirting with other merger partners from Europe and elsewhere, it finally offered its famous name and blue-chip client roster to its fellow New Yorker for about $36 billion in stock. (Madura & Wiant, 1994)

2.1 Purpose of the study

The history before the acquisition is very important for grasping the enormity of the resulting company. In 1991, Chemical Banking Corp. merged with Manufacturers Hanover Corp., keeping the name Chemical Banking Corp., then the second largest banking institution in the United States. In 1995, First Chicago Corp. merged with NBD Bancorp Inc., forming First Chicago NBD Corp., the largest banking company based in the Midwest. In 1996, Chemical Banking Corp. merged with The Chase Manhattan Corp., keeping the name The Chase Manhattan Corp. and creating what then was the largest bank holding company in the United States.

2.2 Significance of this study

In 2000, The Chase Manhattan Corp. merged with J.P. Morgan & Co. Incorporated, in effect combining four of the largest and oldest money center banking institutions in New York City (Morgan, Chase, Chemical, and Manufacturers Hanover) into one firm called JPMorgan Chase & Co. In 2004, Bank One Corp.
merged with JPMorgan Chase & Co., keeping the name JPMorgan Chase & Co. In 2008, JPMorgan Chase & Co. acquired The Bear Stearns Companies Inc., strengthening its capabilities across a broad range of businesses, including prime brokerage, cash clearing, and energy trading globally.

2.3 Limitations

It becomes abundantly clear that there is no clear direction in terms of the mergers and acquisitions that JPMorgan Chase & Co. engaged in before and after the marriage of the giants happened. The merger was hailed and appreciated at a time when very large mergers were in vogue. The merger seemed to have happened through pressure from competition more than anything else. Even after so many years of being together, it is not very easy to tell if the individual entities are acting as one (Wilson, 2003). The problem faced is really one of cohesiveness and integration. Although the merger went through, the lack of a proper regulatory authority to oversee such mergers leads to situations such as the sub-prime crisis of 2007-2008.

3. RESEARCH MODEL

3.1 The Fish Bone Model

The coding scheme adopted for the content analysis was conceptualized in the Porter strategic model (Porter, 1980) as operationalized in a fishbone analysis framework (Nolan, Norton & Company, 1986). The coding of the content of applications approximates the use of a standardized questionnaire. Hence, content analysis has the advantage of both ease and high reliability, but it may be more limited in terms of content validity to the extent that the applications reflect the underlying stated merger decision rationale. These four paths are related to creating economies of scale, expanding geographically, increasing the combined capital base (size) and product offerings, and gaining market power. It appears that decreasing costs, rather than increasing gross revenue, drives much of the merger activity at a higher level. Many of the applications stated the reduction of costs as a reason for the merger. In addition, many of the applications went further than a general statement of cost reduction, explaining that the combined institution would create economies of scale which would result in a reduction in costs as justification for their merger/acquisition request.

3.2 Elements of the Model

- Location
- Product
- Competitors
- Market Trends

However, since mergers/acquisitions within the banking industry must provide certain data (i.e., Community Reinvestment Act compliance or Herfindahl indexes) to reinforce the stated merger/acquisition rationale, there is more validity in the stated rationale for mergers/acquisitions in this industry than in others using this approach (Cornett & De, 1991). The use of the widely accepted Porter strategic model provides an appropriate framework for both inductive and deductive conclusions.

3.3 Previous Research Findings

The model provides a tight linkage to the strategy literature for validity of the coding categories. More than that, the use of multiple coders and a referee ensures a high degree of reliability in the coding effort. For each application, two coders independently code each paragraph and the results are entered into a spreadsheet for data management purposes. The results of the two coders are then compared, and, if there is any disagreement, the referee discusses the differences with the other coders and makes a final determination. For each application, a resultant tabulation was created and overlaid upon the fishbone for visual inspection. Hence, this model contains the total numerical count of the entire sample.
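The two-coder-plus-referee procedure described above can be illustrated with a minimal sketch; the agreement measure and all names here are assumptions of this sketch, since the study publishes no tabulation code:

# Minimal sketch of reconciling two coders' paragraph-level codes.
# The four path labels follow the paper; everything else is illustrative.
PATHS = {"economies of scale", "geographic expansion", "capital base", "market power"}

def percent_agreement(coder_a, coder_b):
    """Share of paragraphs on which the two coders agree."""
    assert len(coder_a) == len(coder_b) and set(coder_a) | set(coder_b) <= PATHS
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def referee_queue(coder_a, coder_b):
    """Paragraph indices where the coders disagree; a referee resolves these."""
    return [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]

coder_a = ["economies of scale", "market power", "capital base"]
coder_b = ["economies of scale", "geographic expansion", "capital base"]
print(percent_agreement(coder_a, coder_b))  # 0.67: two of three paragraphs match
print(referee_queue(coder_a, coder_b))      # [1]: the referee makes the final call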
3.4 Critics for the Previous Research

Previous literature finds empirical evidence of links between mergers and financial performance, measured in terms of either profitability or operating efficiency (Berger, Demsetz, & Strahan, 1999). The US experience cannot be a global paradigm because US banks have dominance in the global financial arena. Prior to the US bank merger wave, banks that operated under long-standing geographic restrictions could not expand their branch networks when market opportunities arose outside their market areas. A sustained period of banking distress began in 1981: the thrift industry collapsed, and many banks experienced distress in the early 1980s due to credit problems ranging from Latin American loans to loans in oil-rich domestic areas, loans for corporate mergers, and commercial real estate. Failing or troubled institutions were often taken over by expansion-oriented commercial banks; NationsBank grew through astute acquisitions during the period. Government-assisted mergers accounted for the majority of bank mergers in the United States between 1982 and 1989. This period of distress mergers led to a shift in regulatory philosophy. Until this period, regulators, guided by antitrust law and the Bank Holding Company Acts of 1956 and 1970, placed some restrictions on bank activities and expansion, using the criterion that firms with monopolistic power will exploit it. In this period, many regulatory economists adopted the Chicago "new learning" approach, which shifted attention from monopoly position to contestability. The regulatory test for market power was weakened, which permitted federal regulators to override product-line and geographic restrictions in approving distress mergers. The Federal Reserve used regulatory flexibility to force modernization in U.S. banking laws. Bank regulators increasingly operated on the premise that the industry is overbanked and that financial innovations have made capital and credit universally available. One approach was the emergence of an upscale retail banking strategy.

4. PREVIOUS RESEARCH METHODOLOGY

Banks using this approach identify a preferred customer base to which they can deliver both traditional banking services (short-term consumer loans, long-term mortgages, depository services) and nontraditional services such as mutual funds, insurance, and investment advice. The second and related approach was a shift away from maturity transformation and interest-based income, towards maturity matching, secondary market sales, and fee-based income. Much of the revenue from upscale households takes the form of fees, encouraged by the growth of secondary loan markets and of banks' involvement in household portfolio management. The proportion of interest expenses within banks' overall expenses has declined since 1982; noninterest income has been an increasing share of bank income since 1978 (DeYoung, 1994). Large banking firms have led the second phase of the U.S. bank merger wave because they have most aggressively pursued upscale-retail and fee-based strategies. Since these banks are not more efficient or more profitable than the smaller banks they purchase, earnings increases have not financed these acquisitions; Wall Street has.
Wall Street's analysts have adopted the concept of banking industry excess capacity, and brokers and underwriters have earned substantial fees from the equity issues that have provided the cash needed to sweeten offers for target banks' equity shares (Serwer, 1995; Chong, 1991).

5. CONCLUSION

Although many frameworks are used for analysis of other industries, they often do not work within the banking industry because of the imposed regulatory constraints; the analysis reveals that the Porter model is suitable in this case for examining the rationale behind merger/acquisition activity in the banking industry. There are four main paths, for the period examined, that explain the reasons behind merger/acquisition activity. "Utilizing the synergies between the two partners" is a common phrase found throughout the applications. The usual scenario is that the smaller partner will combine with the larger partner in order to develop economies of scale and to reduce their combined costs. The remaining three paths are related to increasing gross revenue, but at a much lower level on the fishbone framework. Most of the applications justified the merger either directly or indirectly by referencing the combined bank's ability to expand geographically into various markets in which the individual banks had not previously had a market presence. As a result, through geographical expansion, the bank would be able to decrease total risk as well as increase product sales and, thus, increase overall gross revenue. Many of the merger/acquisition applications either directly or indirectly justified the merger through the fact that the combined asset base (size) would be larger, thus allowing the banks to make loans to companies that the individual banks could not previously have serviced due to capital-base lending regulatory restrictions. In essence, the larger capital base allowed the merged institutions to offer a new product (jumbo loans) to existing customers or to gain new customers through the new product offering. In addition, on the same path, many of the applications justified the merger through the ability to offer a greater array of products. The smaller partner (usually) would be able to offer products already carried by the larger partner that, due to the smaller partner's size, it had previously been unable to offer. In both cases, the merger would allow the combined institution to offer a greater product array, increasing its sales and, thereby, its gross revenue. The last path deals with the often indirect merger justification of increasing market power. Through the merger, the merged banks would be better able to compete with banks within their market, increasing their product sales and, thus, their gross revenue.

Tuesday, May 12, 2020

The Impact of Wheeled Vehicles on Human History

The inventions of the wheel and wheeled vehicles (wagons or carts which are supported and moved around by round wheels) had a profound effect on human economy and society. As a way to efficiently carry goods for long distances, wheeled vehicles allowed for the broadening of trade networks. With access to a wider market, craftspeople could more easily specialize, and communities could expand if there was no need to live close to food production areas. In a very real sense, wheeled vehicles facilitated periodic farmers' markets. Not all changes brought by wheeled vehicles were good ones, however: with the wheel, imperialist elites were able to expand their range of control, and wars could be waged farther afield.

Key Takeaways: Invention of the Wheel

- The earliest evidence for wheel use is that of drawings on clay tablets, found nearly simultaneously throughout the Mediterranean region about 3500 BCE.
- Parallel innovations dated to about the same time as the wheeled vehicle are the domestication of the horse and prepared trackways.
- Wheeled vehicles are helpful, but not necessary, for the introduction of extensive trade networks and markets, craft specialists, imperialism, and the growth of settlements in different complex societies.

Parallel Innovations

It wasn't simply the invention of wheels alone that created these changes. Wheels are most useful in combination with suitable draft animals such as horses and oxen, as well as prepared roadways. The earliest planked roadway we know of, Plumstead in the United Kingdom, dates to about the same time as the wheel, 5,700 years ago. Cattle were domesticated about 10,000 years ago and horses probably about 5,500 years ago. Wheeled vehicles were in use across Europe by the third millennium BCE, as evidenced by the discovery of clay models of high-sided four-wheeled carts throughout the Danube and Hungarian plains, such as that from the site of Szigetszentmárton in Hungary. More than 20 wooden wheels dated to the late and final Neolithic have been discovered in different wetland contexts across central Europe, dating between about 3300–2800 BCE.

Wheels were invented in the Americas, too, but because draft animals were not available, wheeled vehicles were not an American innovation. Trade flourished in the Americas, as did craft specialization, imperialism and wars, road construction, and the expansion of settlements, all without wheeled vehicles: but there's no doubt that having the wheel did drive (pardon the pun) many social and economic changes in Europe and Asia.

Earliest Evidence

The earliest evidence for wheeled vehicles appears simultaneously in Southwest Asia and Northern Europe, about 3500 BCE. In Mesopotamia, that evidence is from images, pictographs representing four-wheeled wagons found inscribed on clay tablets dated to the late Uruk period of Mesopotamia. Models of solid wheels, carved from limestone or modeled in clay, have been found in Syria and Turkey, at sites dated approximately a century or two later. Although long-standing tradition credits the southern Mesopotamian civilization with the invention of wheeled vehicles, today scholars are less certain, as there appears to be a nearly simultaneous record of use throughout the Mediterranean basin. Scholars are divided as to whether this is the result of the rapid dissemination of a single invention or of multiple independent innovations. In technological terms, the earliest wheeled vehicles appear to have been four-wheeled, as determined from models identified at Uruk (Iraq) and Bronocice (Poland).
A two-wheeled cart is illustrated at the end of the fourth millennium BCE, at Lohne-Engelshecke, Germany (~3402–2800 cal BCE, calendar years BCE). The earliest wheels were single-piece discs, with a cross-section roughly approximating the spindle whorl, that is, thicker in the middle and thinning to the edges. In Switzerland and southwestern Germany, the earliest wheels were fixed to a rotating axle through a square mortise, so that the wheels turned together with the axle. Elsewhere in Europe and the Near East, the axle was fixed and straight, and the wheels turned independently. When wheels turn freely from the axle, a drayman can turn the cart without having to drag the outside wheel.

Wheel Ruts and Pictographs

The oldest known evidence of wheeled vehicles in Europe comes from the Flintbek site, a Funnel Beaker culture site near Kiel, Germany, dated to 3420–3385 cal BCE. A series of parallel cart tracks was identified beneath the northwestern half of the long barrow at Flintbek, measuring just over 65 ft (20 m) long and consisting of two parallel sets of wheel ruts, up to two ft (60 cm) wide. Each single wheel rut was 2–2.5 in (5–6 cm) wide, and the gauge of the wagons has been estimated at 3.5–4 ft (1.1–1.2 m) wide. On the islands of Malta and Gozo, a number of cart ruts have been found which may or may not be associated with the construction of the Neolithic temples there. At Bronocice in Poland, a Funnel Beaker site located 28 mi (45 km) northeast of Kraków, a ceramic vessel (a beaker) was painted with several repeated images of a schematic four-wheel wagon and yoke as part of the design. The beaker is associated with cattle bone dated to 3631–3380 cal BCE. Other pictographs are known from Switzerland, Germany, and Italy; two wagon pictographs are also known from the Eanna precinct, level 4A at Uruk, dated to 2815±85 BCE (4765±85 BP [5520 cal BP]), and a third is from Tell Uqair: both of these sites are in what is today Iraq. Reliable dates indicate that two- and four-wheeled vehicles were known from the mid-fourth millennium BCE throughout most of Europe. Single wheels made of wood have been identified from Denmark and Slovenia.

Models of Wheeled Wagons

While miniature models of wagons are useful to the archaeologist, because they are explicit, information-bearing artifacts, they must also have had some specific meaning and significance in the various regions where they were used. Models are known from Mesopotamia, Greece, Italy, the Carpathian basin, the Pontic region, India, and China. Complete life-sized vehicles are also known from Holland, Germany, and Switzerland, occasionally used as funeral objects.

A wheel model carved out of chalk was recovered from the late Uruk site of Jebel Aruda in Syria. This asymmetrical disk measures 3 in (8 cm) in diameter and 1 in (3 cm) thick, and the wheel has hubs on both sides. A second wheel model was discovered at the Arslantepe site in Turkey. This disc made of clay measures 3 in (7.5 cm) in diameter and has a central hole where presumably the axle would have gone. This site also includes local wheel-thrown imitations of the simplified form of late Uruk pottery. One recently reported miniature model comes from the site of Nemesnádudvar, an early Bronze Age through Late Medieval site located near the town of Nemesnádudvar, County Bács-Kiskun, Hungary. The model was discovered along with various pottery fragments and animal bones in a part of the settlement dated to the early Bronze Age.
The model is 10.4 in (26.3 cm) long, 5.8 in (14.9 cm) wide, and has a height of 2.5 in (8.8 cm). Wheels and axles for the model were not recovered, but the round feet were perforated as if they had existed at one time. The model is made out of clay tempered with crushed ceramics and fired to a brownish gray color. The bed of the wagon is rectangular, with straight-sided short ends and curved edges on the long side. The feet are cylindrical; the entire piece is decorated in zoned, parallel chevrons and oblique lines.

Ulan IV, Burial 15, Kurgan 4

In 2014, archaeologist Natalia Shishlina and colleagues reported the recovery of a dismantled four-wheeled full-sized wagon, direct-dated to between 2398–2141 cal BCE. This Early Bronze Age Steppe Society (specifically East Manych Catacomb culture) site in Russia contained the interment of an elderly man, whose grave goods also included a bronze knife and rod and a turnip-shaped pot. The rectangular wagon frame measured 5.4 x 2.3 ft (1.65 x 0.7 m) and the wheels, supported by horizontal axles, were 1.6 ft (0.48 m) in diameter. Side panels were constructed of horizontally placed planks, and the interior was probably covered with reed, felt, or woolen mat. Curiously, the different parts of the wagon were made of a variety of woods, including elm, ash, maple, and oak.

Sources

Bakker, Jan Albert, et al. "The Earliest Evidence of Wheeled Vehicles in Europe and the Near East." Antiquity 73.282 (1999): 778–90. Print.
Bondár, Mária, and György V. Székely. "A New Early Bronze Age Wagon Model from the Carpathian Basin." World Archaeology 43.4 (2011): 538–53. Print.
Bulliet, Richard W. The Wheel: Inventions and Reinventions. New York: Columbia University Press, 2016. Print.
Klimscha, Florian. "Cultural Diversity in Prehistoric Western Eurasia: How Were Innovations Diffused and Re-Invented in Ancient Times?" Claroscuro 16.16 (2018): 1–30. Print.
Mischka, Doris. "The Neolithic Burial Sequence at Flintbek LA 3, North Germany, and Its Cart Tracks: A Precise Chronology." Antiquity 85.329 (2011): 742–58. Print.
Sax, Margaret, Nigel D. Meeks, and Dominique Collon. "The Introduction of the Lapidary Engraving Wheel in Mesopotamia." Antiquity 74.284 (2015): 380–87. Print.
Schier, Wolfram. "Central and Eastern Europe." The Oxford Handbook of Neolithic Europe. Eds. Fowler, Chris, Jan Harding, and Daniela Hofmann. Oxford: Oxford University Press, 2014. Print.
Shishlina, N. I., D. S. Kovalev, and E. R. Ibragimova. "Catacomb Culture Wagons of the Eurasian Steppes." Antiquity 88.340 (2014): 378–94. Print.
Vandkilde, Helle. "Breakthrough of the Nordic Bronze Age: Transcultural Warriorhood and a Carpathian Crossroad in the Sixteenth Century BC." European Journal of Archaeology 17.4 (2014): 602–33. Print.

Wednesday, May 6, 2020

Anomalies in Option Pricing

Anomalies in option pricing: the Black-Scholes model revisited
New England Economic Review, March-April 1996, by Peter Fortune

This study is the third in a series of Federal Reserve Bank of Boston studies contributing to a broader understanding of derivative securities. The first (Fortune 1995) presented the rudiments of option pricing theory and addressed the equivalence between exchange-traded options and portfolios of underlying securities, making the point that plain vanilla options, and many other derivative securities, are really repackages of old instruments, not novel in themselves. That paper used the concept of portfolio insurance as an example of this equivalence. The second (Minehan and Simons 1995) summarized the presentations at "Managing Risk in the '90s: What Should You Be Asking about Derivatives?", an educational forum sponsored by the Boston Fed.

The present paper addresses the question of how well the best-known option pricing model, the Black-Scholes model, works. A full evaluation of the many option pricing models developed since the seminal Black-Scholes paper of 1973 is beyond the scope of this paper. Rather, the goal is to acquaint a general audience with the key characteristics of a model that is still widely used, and to indicate the opportunities for improvement which might emerge from current research and which are undoubtedly the basis for the considerable current research on derivative securities. The hope is that this study will be useful to students of financial markets as well as to financial market practitioners, and that it will stimulate them to look into the more recent literature on the subject.

The paper is organized as follows. The next section briefly reviews the key features of the Black-Scholes model, identifying some of its most prominent assumptions and laying a foundation for the remainder of the paper. The second section employs recent data on almost one-half million options transactions to evaluate the Black-Scholes model. The third section discusses some of the reasons why the Black-Scholes model falls short and assesses some recent research designed to improve our ability to explain option prices. The paper ends with a brief summary. Those readers unfamiliar with the basics of stock options might refer to Fortune (1995). Box 1 briefly reviews the fundamental language of options and explains the notation used in the paper.

I. The Black-Scholes Model

In 1973, Myron Scholes and the late Fischer Black published their seminal paper on option pricing (Black and Scholes 1973). The Black-Scholes model revolutionized financial economics in several ways. First, it contributed to our understanding of a wide range of contracts with option-like features. For example, the call feature in corporate and municipal bonds is clearly an option, as is the refinancing privilege in mortgages. Second, it allowed us to revise our understanding of traditional financial instruments.
For example, because shareholders can turn the company over to creditors if it has negative net worth, corporate debt can be viewed as a put option bought by the shareholders from creditors. The Black-Scholes model explains the prices on European options, which cannot be exercised before the expiration date. Box 2 summarizes the Black-Scholes model for pricing a European call option on which dividends are paid continuously at a constant rate. A crucial feature of the model is that the call option is equivalent to a portfolio constructed from the underlying stock and bonds. The "option-replicating portfolio" consists of a fractional share of the stock combined with borrowing a specific amount at the riskless rate of interest. This equivalence, developed more fully in Fortune (1995), creates price relationships which are maintained by the arbitrage of informed traders.

The Black-Scholes option pricing model is derived by identifying an option-replicating portfolio, then equating the option's premium with the value of that portfolio. An essential assumption of this pricing model is that investors arbitrage away any profits created by gaps in asset pricing. For example, if the call is trading "rich," investors will write calls and buy the replicating portfolio, thereby forcing the prices back into line. If the option is trading low, traders will buy the option and short the option-replicating portfolio (that is, sell stocks and buy bonds in the correct proportions). By doing so, traders take advantage of riskless opportunities to make profits, and in so doing they force option, stock, and bond prices to conform to an equilibrium relationship.

Arbitrage allows European puts to be priced using put-call parity. Consider purchasing one call that expires at time T and lending the present value of the strike price at the riskless rate of interest. The cost is $C_t + Xe^{-r(T-t)}$. (See Box 1 for notation: C is the call premium, X is the call's strike price, r is the riskless interest rate, T is the call's expiration date, and t is the current date.) At the option's expiration the position is worth the higher of the stock price ($S_T$) or the strike price, a value denoted as $\max(S_T, X)$. Now consider another investment, purchasing one put with the same strike price as the call, plus buying the fraction $e^{-q(T-t)}$ of one share of the stock. Denoting the put premium by P and the stock price by S, the cost of this is $P_t + e^{-q(T-t)}S_t$, and, at time T, the value of this position is also $\max(S_T, X)$.(1) Because both positions have the same terminal value, arbitrage will force them to have the same initial value. Suppose that $C_t + Xe^{-r(T-t)} > P_t + e^{-q(T-t)}S_t$, for example. In this case, the cost of the first position exceeds the cost of the second, but both must be worth the same at the option's expiration. The first position is overpriced relative to the second, and shrewd investors will go short the first and long the second; that is, they will write calls and sell bonds (borrow), while simultaneously buying both puts and the underlying stock. The result will be that, in equilibrium, equality will prevail and $C_t + Xe^{-r(T-t)} = P_t + e^{-q(T-t)}S_t$. Thus, arbitrage will force a parity between premiums of put and call options.
Using this put-call parity, it can be shown that the premium for a European put option paying a continuous dividend at q percent of the stock price is:

$P_t = -e^{-q(T-t)}S_t N(-d_1) + Xe^{-r(T-t)}N(-d_2)$

where $d_1$ and $d_2$ are defined as in Box 2.

The importance of arbitrage in the pricing of options is clear. However, many option pricing models can be derived from the assumption of complete arbitrage. Each would differ according to the probability distribution of the price of the underlying asset. What makes the Black-Scholes model unique is that it assumes that stock prices are log-normally distributed, that is, that the logarithm of the stock price is normally distributed. This is often expressed in a "diffusion model" (see Box 2) in which the (instantaneous) rate of change in the stock price is the sum of two parts, a "drift," defined as the difference between the expected rate of change in the stock price and the dividend yield, and "noise," defined as a random variable with zero mean and constant variance. The variance of the noise is called the "volatility" of the stock's rate of price change. Thus, the rate of change in a stock price vibrates randomly around its expected value in a fashion sometimes called "white noise."

The Black-Scholes models of put and call option pricing apply directly to European options as long as a continuous dividend is paid at a constant rate. If no dividends are paid, the models also apply to American call options, which can be exercised at any time. In this case, it can be shown that there is no incentive for early exercise, hence the American call option must trade like its European counterpart. However, the Black-Scholes model does not hold for American put options, because these might be exercised early, nor does it apply to any American option (put or call) when a dividend is paid.(2) Our empirical analysis will sidestep those problems by focusing on European-style options, which cannot be exercised early.

A call option's intrinsic value is defined as $\max(S - X, 0)$, that is, the larger of $S - X$ or zero; a put option's intrinsic value is $\max(X - S, 0)$. When the stock price (S) exceeds a call option's strike price (X), or falls short of a put option's strike price, the option has a positive intrinsic value, because if it could be immediately exercised the holder would receive a gain of $S - X$ for a call, or $X - S$ for a put. However, if $S < X$ the holder of a call will not exercise the option and it has no intrinsic value; if $X > S$ this will be true for a put. The intrinsic value of a call is the kinked line in Figure 1 (a put's intrinsic value, not shown, would have the opposite kink). When the stock price exceeds the strike price, the call option is said to be in-the-money. It is out-of-the-money when the stock price is below the strike price. Thus, the kinked line, or intrinsic value, is the income from immediately exercising the option: when the option is out-of-the-money, its intrinsic value is zero, and when it is in-the-money, the intrinsic value is the amount by which S exceeds X.
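Box 2 of the original paper, which defines $d_1$ and $d_2$ and gives the call-pricing formula, is not reproduced in this excerpt. As a purely illustrative sketch of the standard dividend-adjusted Black-Scholes formulas (the function names and the example stock price of $52 are assumptions of this sketch, not the paper's), the call and put premiums and the parity relationship just derived can be written as:

# Illustrative sketch of Black-Scholes pricing with a continuous dividend
# yield q; formulas follow the standard model summarized in Box 2.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal cumulative distribution N(.)

def d1_d2(S, X, r, q, sigma, tau):
    """d1 and d2 for remaining life tau = T - t, in years."""
    d1 = (log(S / X) + (r - q + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return d1, d1 - sigma * sqrt(tau)

def call_premium(S, X, r, q, sigma, tau):
    d1, d2 = d1_d2(S, X, r, q, sigma, tau)
    return exp(-q * tau) * S * N(d1) - X * exp(-r * tau) * N(d2)

def put_premium(S, X, r, q, sigma, tau):
    d1, d2 = d1_d2(S, X, r, q, sigma, tau)
    return X * exp(-r * tau) * N(-d2) - exp(-q * tau) * S * N(-d1)

# A hypothetical option like that of Figure 1: 60-day term, strike $50,
# 20 percent volatility, 5 percent riskless rate, no dividend.
S, X, r, q, sigma, tau = 52.0, 50.0, 0.05, 0.0, 0.20, 60 / 365
C = call_premium(S, X, r, q, sigma, tau)
P = put_premium(S, X, r, q, sigma, tau)

# Put-call parity: C + X e^(-r tau) must equal P + S e^(-q tau).
assert abs((C + X * exp(-r * tau)) - (P + S * exp(-q * tau))) < 1e-9

The final assertion holds exactly because subtracting the two formulas gives $C - P = e^{-q(T-t)}S - Xe^{-r(T-t)}$, which is the parity relationship rearranged.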
Convexity, the Call Premium, and the Greek Chorus

The premium, or price paid for the option, is shown by the curved line in Figure 1. This curvature, or "convexity," is a key characteristic of the premium on a call option. Figure 1 shows the relationship between a call option's premium and the underlying stock price for a hypothetical option having a 60-day term, a strike price of $50, and a volatility of 20 percent. A 5 percent riskless interest rate is assumed. The call premium has an upward-sloping relationship with the stock price, and the slope rises as the stock price rises. This means that the sensitivity of the call premium to changes in the stock price is not constant and that the option-replicating portfolio changes with the stock price.

The convexity of option premiums gives rise to a number of technical concepts which describe the response of the premium to changes in the variables and parameters of the model. For example, the relationship between the premium and the stock price is captured by the option's Delta ($\Delta$) and its Gamma ($\Gamma$). Defined as the slope of the premium at each stock price, the Delta tells the trader how sensitive the option price is to a change in the stock price.(3) It also tells the trader the value of the hedging ratio.(4) For each share of stock held, a perfect hedge requires writing $1/\Delta_c$ call options or buying $1/\Delta_p$ puts. Figure 2 shows the Delta for our hypothetical call option as a function of the stock price. As S increases, the value of Delta rises until it reaches its maximum at a stock price of about $60, or $10 in-the-money. After that point, the option premium and the stock price have a 1:1 relationship. The increasing Delta also means that the hedging ratio falls as the stock price rises. At higher stock prices, fewer call options need to be written to insulate the investor from changes in the stock price.

The Gamma is the change in the Delta when the stock price changes.(5) Gamma is positive for both calls and puts. The Gamma tells the trader how much the hedging ratio changes if the stock price changes. If Gamma were zero, Delta would be independent of S, and changes in S would not require adjustment of the number of calls required to hedge against further changes in S. The greater the Gamma, the more "out-of-line" a hedge becomes when the stock price changes, and the more frequently the trader must adjust the hedge. Figure 2 shows the value of Gamma as a function of the amount by which our hypothetical call option is in-the-money.(6) Gamma is almost zero for deep-in-the-money and deep-out-of-the-money options, but it reaches a peak for near-the-money options. In short, traders holding near-the-money options will have to adjust their hedges frequently and sizably as the stock price vibrates. If traders want to go on long vacations without changing their hedges, they should focus on far-away-from-the-money options, which have near-zero Gammas.

A third member of the Greek chorus is the option's Lambda ($\Lambda$), also called Vega.(7) Vega measures the sensitivity of the call premium to changes in volatility. The Vega is the same for calls and puts having the same strike price and expiration date. As Figure 2 shows, a call option's Vega conforms closely to the pattern of its Gamma, peaking for near-the-money options and falling to zero for deep-out or deep-in options. Thus, near-the-money options appear to be most sensitive to changes in volatility.
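As a hedged extension of the pricing sketch above (it reuses that sketch's N and d1_d2, and the parameter values are again illustrative rather than the paper's), the closed-form expressions for Delta, Gamma, and Vega can be written as:

# Closed-form Black-Scholes Greeks with continuous dividend yield q,
# extending the pricing sketch above; n(.) is the standard normal density.
from math import exp, sqrt, pi

def n(x):
    return exp(-0.5 * x * x) / sqrt(2 * pi)

def call_delta(S, X, r, q, sigma, tau):
    d1, _ = d1_d2(S, X, r, q, sigma, tau)
    return exp(-q * tau) * N(d1)  # slope of the premium with respect to S

def gamma(S, X, r, q, sigma, tau):
    d1, _ = d1_d2(S, X, r, q, sigma, tau)
    # Change in Delta per unit change in S; identical for calls and puts.
    return exp(-q * tau) * n(d1) / (S * sigma * sqrt(tau))

def vega(S, X, r, q, sigma, tau):
    d1, _ = d1_d2(S, X, r, q, sigma, tau)
    # Sensitivity of the premium to volatility; identical for calls and puts.
    return exp(-q * tau) * S * n(d1) * sqrt(tau)

# Gamma peaks near the money and is close to zero deep in or out of the
# money, which is why near-the-money hedges need frequent adjustment.
print(gamma(50.0, 50.0, 0.05, 0.0, 0.20, 60 / 365))  # at-the-money: large
print(gamma(80.0, 50.0, 0.05, 0.0, 0.20, 60 / 365))  # deep-in-the-money: ~0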
Because an option's premium is directly related to its volatility (the higher the volatility, the greater the chance of it being deep-in-the-money at expiration), any propositions about an option's price can be translated into statements about the option's volatility, and vice versa. For example, other things equal, a high volatility is synonymous with a high option premium for both puts and calls. Thus, in many contexts we can use volatility and premium interchangeably. We will use this result below when we address an option's implied volatility.

Other Greeks are present in the Black-Scholes pantheon, though they are lesser gods. The option's Rho ($\rho$) is the sensitivity of the call premium to changes in the riskless interest rate.(8) Rho is always positive for a call (negative for a put) because a rise in the interest rate reduces the present value of the strike price paid (or received) at expiration if the option is exercised. The option's Theta ($\Theta$) measures the change in the premium as the term shortens by one time unit.(9) Theta is always negative because an option is less valuable the shorter the time remaining.

The Black-Scholes Assumptions

The assumptions underlying the Black-Scholes model are few, but strong. They are:

* Arbitrage: Traders can, and will, eliminate any arbitrage profits by simultaneously buying (or writing) options and writing (or buying) the option-replicating portfolio whenever profitable opportunities appear.
* Continuous Trading: Trading in both the option and the underlying security is continuous in time, that is, transactions can occur simultaneously in related markets at any instant.
* Leverage: Traders can borrow or lend in unlimited amounts at the riskless rate of interest.
* Homogeneity: Traders agree on the values of the relevant parameters, for example, on the riskless rate of interest and on the volatility of the returns on the underlying security.
* Distribution: The price of the underlying security is log-normally distributed with statistically independent price changes, and with constant mean and constant variance.
* Continuous Prices: No discontinuous jumps occur in the price of the underlying security.
* Transactions Costs: The cost of engaging in arbitrage is negligibly small.

The arbitrage assumption, a fundamental proposition in economics, has been discussed above. The continuous trading assumption ensures that at all times traders can establish hedges by simultaneously trading in options and in the underlying portfolio. This is important because the Black-Scholes model derives its power from the assumption that at any instant, arbitrage will force an option's premium to be equal to the value of the replicating portfolio. This cannot be done if trading occurs in one market while trading in related markets is barred or delayed. For example, during a halt in trading of the underlying security one would not expect option premiums to conform to the Black-Scholes model. This would also be true if the underlying security were inactively traded, so that the trader had "stale" information on its price when contemplating an options transaction.

The leverage assumption allows the riskless interest rate to be used in options pricing without reference to a trader's financial position, that is, to whether and how much he is borrowing or lending. Clearly this is an assumption adopted for convenience and is not strictly true. However, it is not clear how one would proceed if the rate on loans were related to traders' financial choices.
This assumption is common to finance theory: for example, it is one of the assumptions of the Capital Asset Pricing Model. Furthermore, while private traders have credit risk, important players in the option markets, such as nonfinancial corporations and major financial institutions, have very low credit risk over the lifetime of most options (a year or less), suggesting that departures from this assumption might not be very important.

The homogeneity assumption, that traders share the same probability beliefs and opportunities, flies in the face of common sense. Clearly, traders differ in their judgments of such important things as the volatility of an asset's future returns, and they also differ in their time horizons, some thinking in hours, others in days, and still others in weeks, months, or years. Indeed, much of the actual trading that occurs must be due to differences in these judgments, for otherwise there would be no disagreements with "the market" and financial markets would be pretty dull and uninteresting.

The distribution assumption is that stock prices are generated by a specific statistical process, called a diffusion process, which leads to a normal distribution of the logarithm of the stock's price. Furthermore, the continuous price assumption means that any changes in prices that are observed reflect only different draws from the same underlying log-normal distribution, not a change in the underlying probability distribution itself.

II. Tests of the Black-Scholes Model

Assessments of a model's validity can be done in two ways. First, the model's predictions can be confronted with historical data to determine whether the predictions are accurate, at least within some statistical standard of confidence. Second, the assumptions made in developing the model can be assessed to determine if they are consistent with observed behavior or historical data. A long tradition in economics focuses on the first type of test, arguing that "the proof is in the pudding." It is argued that any theory requires assumptions that might be judged "unrealistic," and that if we focus on the assumptions, we can end up with no foundations for deriving the generalizations that make theories useful. The only proper test of a theory lies in its predictive ability: the theory that consistently predicts best is the best theory, regardless of the assumptions required to generate it. Tests based on assumptions are justified by the principle of "garbage in, garbage out." This approach argues that no theory derived from invalid assumptions can be valid. Even if it appears to have predictive abilities, those can slip away quickly when changes in the environment make the invalid assumptions more pivotal.

The Data

The data used in this study are from the Chicago Board Options Exchange's Market Data Retrieval System. The MDR reports the number of contracts traded, the time of the transaction, the premium paid, the characteristics of the option (put or call, expiration date, strike price), and the price of the underlying stock at its last trade. This information is available for each option listed on the CBOE, providing as close to a real-time record of transactions as can be found. While our analysis uses only records of actual transactions, the MDR also reports the same information for every request of a quote. Quote records differ from the transaction records only in that they show both the bid and asked premiums and have a zero number of contracts traded.
The data used are for the 1992-94 period. We selected the MDR data for the S&P 500 stock index (SPX) for several reasons. First, the SPX options contract is the only European-style stock index option traded on the CBOE. All options on individual stocks and on other indices (for example, the S&P 100 index, the Major Market Index, the NASDAQ 100 index) are American options for which the Black-Scholes model would not apply. The ability to focus on a European-style option has several advantages. By allowing us to ignore the potential influence of early exercise, a possibility that significantly affects the premiums on American options on dividend-paying stocks as well as the premiums on deep-in-the-money American put options, we can focus on options for which the Black-Scholes model was designed. In addition, our interest is not in individual stocks and their options, but in the predictive power of the Black-Scholes option pricing model. Thus, an index option allows us to make broader generalizations about model performance than would a select set of equity options. Finally, the S&P 500 index options trade in a very active market, while options on many individual stocks and on some other indices are thinly traded.

The full MDR data set for the SPX over the roughly 758 trading days in the 1992-94 period consisted of more than 100 million records. In order to bring this down to a manageable size, we eliminated all records that were requests for quotes, selecting only records reflecting actual transactions. Some of these transaction records were cancellations of previous trades, for example, trades made in error. If a trade was canceled, we included the records of the original transaction because they represented market conditions at the time of the trade, and because there is no way to determine precisely which transaction was being canceled. We eliminated the cancellation records themselves because they record the S&P 500 at the time of the cancellation, not at the time of the original trade; thus, cancellation records will contain stale prices. This screening created a data set with over 726,000 records.

In order to complete the data required for each transaction, the bond-equivalent yield (average of bid and asked) on the Treasury bill with maturity closest to the expiration date of the option was used as the riskless interest rate. These data were available for terms of 180 days or less, so we excluded options with a term longer than 180 days, leaving over 486,000 usable records having both CBOE and Treasury bill data. For each of these, we assigned a dividend yield based on the S&P 500 dividend yield in the month of the option trade. Because each record shows the actual S&P 500 at almost the same time as the option transaction, the MDR provides an excellent basis for estimating the theoretically correct option premium and evaluating its relationship to actual option premiums.

There are, however, some minor problems with interpreting the MDR data as providing a trader's-eye view of option pricing. The transaction data are not entered into the CBOE computer at the exact moment of the trade. Instead, a ticket is filled out and then entered into the computer, and it is only at that time that the actual level of the S&P 500 is recorded.
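The screening just described is mechanical; a sketch of it in pandas follows, with hypothetical column names, since the text does not give the MDR's actual field layout.

```python
import pandas as pd

def screen_mdr(records: pd.DataFrame) -> pd.DataFrame:
    """Screening rules described in the text. Column names are assumptions,
    not the MDR's actual fields:
      contracts - number of contracts traded (zero for quote requests)
      is_cancel - True for cancellation records (their index prices are stale)
      term_days - calendar days from trade to expiration"""
    kept = records[records["contracts"] > 0]   # drop requests for quotes
    kept = kept[~kept["is_cancel"]]            # drop cancellations, keep originals
    return kept[kept["term_days"] <= 180]      # T-bill yields only cover 180 days
```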
In short, the S&P 500 entries necessarily lag behind the option premium entries, so if the S&P 500 is rising (falling) rapidly, the reported value of the SPX will be above (below) the true value known to traders at the time of the transaction.

Test 1: An Implied Volatility Test

A key variable in the Black-Scholes model is the volatility of returns on the underlying asset, the SPX in our case. Investors are assumed to know the true standard deviation of the rate of return over the term of the option, and this information is embedded in the option premium. While the true volatility is an unobservable variable, the market's estimate of it can be inferred from option premiums. The Black-Scholes model assumes that this "implied volatility" is an optimal forecast of the volatility in SPX returns observed over the term of the option.

The calculation of an option's implied volatility is reasonably straightforward. Six variables are needed to compute the predicted premium on a call or put option using the Black-Scholes model. Five of these can be objectively measured within reasonable tolerance levels: the stock price (S), the strike price (X), the remaining life of the option (T - t), the riskless rate of interest over the remaining life of the option (r), typically measured by the rate of interest on U.S. Treasury securities that mature on the option's expiration date, and the dividend yield (q). The sixth variable, the "volatility" of the return on the stock price, denoted by σ, is unobservable and must be estimated using numerical methods. Using reasonable values of all the known variables, the implied volatility of an option can be computed as the value of σ that makes the predicted Black-Scholes premium exactly equal to the actual premium. An example of the computation of the implied volatility on an option is shown in Box 3.

The Black-Scholes model assumes that investors know the volatility of the rate of return on the underlying asset, and that this volatility is measured by the (population) standard deviation. If so, an option's implied volatility should differ from the true volatility only because of random events. While these discrepancies might occur, they should be very short-lived and random: Informed investors will observe the discrepancy and engage in arbitrage, which quickly returns things to their normal relationships.

Figure 3 reports two measures of the volatility in the rate of return on the S&P 500 index for each trading day in the 1992-94 period. (10) The "actual" volatility is the ex post standard deviation of the daily change in the logarithm of the S&P 500 over a 60-day horizon, converted to a percentage at an annual rate. For example, for January 5, 1993 the standard deviation of the daily change in ln(S&P 500) was computed for the next 60 calendar days; this became the actual volatility for that day. Note that the actual volatility is the realization of one outcome from the entire probability distribution of the standard deviation of the rate of return. While no single realization will be equal to the "true" volatility, the actual volatility should equal the true volatility "on average."

The second measure of volatility is the implied volatility. This was constructed as follows, using the data described above. For each trading day, the implied volatility on call options meeting two criteria (described below) was computed.
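Before turning to those criteria, here is a minimal sketch of the implied-volatility computation itself: a bracketing root-finder recovers the σ at which the Black-Scholes premium matches the observed premium. The figures are illustrative, not those of Box 3.

```python
from math import exp, log, sqrt
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, X, tau, r, sigma, q=0.0):
    """Black-Scholes premium for a European call with dividend yield q."""
    d1 = (log(S / X) + (r - q + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * exp(-q * tau) * norm.cdf(d1) - X * exp(-r * tau) * norm.cdf(d2)

def implied_volatility(premium, S, X, tau, r, q=0.0):
    """The sigma at which the model premium equals the observed premium,
    bracketed between 0.1% and 500% annualized volatility."""
    return brentq(lambda s: bs_call(S, X, tau, r, s, q) - premium, 1e-3, 5.0)

# Illustrative values only: a 61-day at-the-money call.
iv = implied_volatility(premium=9.50, S=450.0, X=450.0, tau=61 / 365, r=0.05, q=0.025)
print(f"implied volatility: {iv:.2%}")
```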
The criteria were that the option had 45 to 75 calendar days to expiration (the average was 61 days) and that it be near the money (defined as a spread between the S&P 500 and the strike price of no more than 2.5 percent of the S&P 500). The first criterion was adopted to match the term of the implied volatility with the 60-day term of the actual volatility. The second criterion was chosen because, as we shall see later, near-the-money options are most likely to conform to Black-Scholes predictions.

The Black-Scholes model assumes that an option's implied volatility is an optimal forecast of the volatility in SPX returns observed over the term of the option. Figure 3 does not provide visual support for the idea that implied volatilities deviate randomly from actual volatility, a characteristic of optimal forecasting. While the two volatility measures appear to have roughly the same average, extended periods of significant differences are seen. For example, in the last half of 1992 the implied volatility remained well above the actual volatility, and after the two came together in the first half of 1993, they once again diverged for an extended period. It is clear from this visual record that implied volatility does not track actual volatility well. However, this does not mean that implied volatility provides an inferior forecast of actual volatility: It could be that implied volatility satisfies all the scientific requirements of a good forecast in the sense that no other forecasts of actual volatility are better.

In order to pursue the question of the informational content of implied volatility, several simple tests of the hypothesis that implied volatility is an optimal forecast of actual volatility can be applied. One characteristic of an optimal forecast is that the forecast should be unbiased, that is, the forecast error (actual volatility less implied volatility) should have a zero mean. The average forecast error for the data shown in Figure 3 is -0.7283, with a t-statistic of -8.22. This indicates that implied volatility is a biased forecast of actual volatility.

A second characteristic of an optimal forecast is that the forecast error should not depend on any information available at the time the forecast is made. If information were available that would improve the forecast, the forecaster should have already included it in making his forecast. Any remaining forecasting errors should be random and uncorrelated with information available before the day of the forecast. To implement this "residual information test," the forecast error was regressed on the lagged values of the S&P 500 in the three days prior to the forecast. (11) The F-statistic for the significance of the regression coefficients was 4.20, with a significance level of 0.2 percent. This is strong evidence of a statistically significant violation of the residual information test.

The conclusion that implied volatility is a poor forecast of actual volatility has been reached in several other studies using different methods and data. For example, Canina and Figlewski (1993), using data for the S&P 100 in the years 1983 to 1987, found that implied volatility had almost no informational content as a prediction of actual volatility. However, a recent review of the literature on implied volatility (Mayhew 1995) mentions a number of papers that give more support for the forecasting ability of implied volatility.
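Both tests reduce to a few lines of code. A sketch, assuming an array err of daily forecast errors (actual less implied volatility) and a matrix lagged_info whose columns are the S&P 500 levels on the three prior days; both names are hypothetical and assumed aligned by date.

```python
import numpy as np
import statsmodels.api as sm

def unbiasedness_test(err):
    """t-statistic for the hypothesis that the mean forecast error is zero."""
    err = np.asarray(err)
    t_stat = err.mean() / (err.std(ddof=1) / np.sqrt(len(err)))
    return err.mean(), t_stat

def residual_information_test(err, lagged_info):
    """F-test that information known before the forecast (here, lagged
    S&P 500 levels) has no explanatory power for the forecast error."""
    fit = sm.OLS(np.asarray(err), sm.add_constant(np.asarray(lagged_info))).fit()
    return fit.fvalue, fit.f_pvalue
```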
Test 2: The Smile Test

One of the predictions of the Black-Scholes model is that at any moment all SPX options that differ only in the strike price (having the same term to expiration) should have the same implied volatility. For example, suppose that at 10:15 a.m. on November 3, transactions occur in several SPX call options that differ only in the strike price. Because each of the options is for the same interval of time, the value of volatility embedded in the option premiums should be the same. This is a natural consequence of the fact that the variability in the S&P 500's return over any future period is independent of the strike price of an SPX option.

One approach to testing this is to calculate the implied volatilities on a set of options identical in all respects except the strike price. If the Black-Scholes model is valid, the implied volatilities should all be the same (with some slippage for sampling errors). Thus, if a group of options all have a "true" volatility of, say, 12 percent, we should find that the implied volatilities differ from the true level only because of random errors. Possible reasons for these errors are temporary deviations of premiums from equilibrium levels, or a lag in the reporting of the trade so that the value of the SPX at the time stamp is not the value at the time of the trade, or that two options might have the same time stamp but one was delayed more than the other in getting into the computer.

This means that a graph of the implied volatilities against any economic variable should show a flat line. In particular, no relationship should exist between the implied volatilities and the strike price or, equivalently, the amount by which each option is "in-the-money." However, it is widely believed that a "smile" is present in option prices, that is, options far out of the money or far in the money have higher implied volatilities than near-the-money options. Stated differently, deep-out and far-in options trade "rich" (overpriced) relative to near-the-money options. If true, this would make a graph of the implied volatilities against the value by which the option is in-the-money look like a smile: high implied volatilities at the extremes and lower volatilities in the middle.

In order to test this hypothesis, our MDR data were screened for each day to identify any options that have the same characteristics but different strike prices. [Tabular data for Table 1 omitted.] If 10 or more of these "identical" options were found, the average implied volatility for the group was computed and the deviation of each option's implied volatility from its group average, the Volatility Spread, was computed. For each of these options, the amount by which it is in-the-money was computed, creating a variable called ITM (an acronym for in-the-money). ITM is the amount by which an option is in-the-money; it is negative when the option is out-of-the-money. ITM is measured relative to the S&P 500 index level, so it is expressed as a percentage of the S&P 500.

The Volatility Spread was then regressed against a fifth-order polynomial equation in ITM. This allows for a variety of shapes of the relationship between the two variables, ranging from a flat line if Black-Scholes is valid (that is, if all coefficients are zero) through a wavy line with four peaks and troughs. The Black-Scholes prediction that each coefficient in the polynomial regression is zero, leading to a flat line, can be tested by the F-statistic for the regression.
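The regression is easy to state in code. A sketch, assuming arrays vol_spread and itm (hypothetical names) holding the pooled groups of "identical" options:

```python
import numpy as np
import statsmodels.api as sm

def smile_test(vol_spread, itm):
    """Regress the Volatility Spread on a fifth-order polynomial in ITM and
    F-test the Black-Scholes prediction that every coefficient is zero."""
    itm = np.asarray(itm)
    powers = np.column_stack([itm**k for k in range(1, 6)])  # ITM, ITM^2, ..., ITM^5
    fit = sm.OLS(np.asarray(vol_spread), sm.add_constant(powers)).fit()
    return fit.fvalue, fit.f_pvalue, fit.rsquared  # a flat line implies F near 1, R^2 near 0
```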
The results are reported in Table 1, which shows the F-statistic for the hypothesis that all coefficients of the fifth-degree polynomial are jointly zero. Also reported is the proportion of the variation in the Volatility Spreads that is explained by variations in ITM (R^2). The results strongly reject the Black-Scholes model. The F-statistics are extremely high, indicating virtually no chance that the value of ITM is irrelevant to the explanation of implied volatilities. The values of R^2 are also high, indicating that ITM explains about 40 to 60 percent of the variation in the Volatility Spread.

Figure 4 shows, for call options only, the pattern of the relationship between the Volatility Spread and the amount by which an option is in-the-money. The vertical axis, labeled Volatility Spread, is the deviation of the implied volatility predicted by the polynomial regression from the group mean of implied volatilities for all options trading on the same day with the same expiration date. For each year the pattern is shown throughout that year's range of values for ITM. While the pattern for each year looks more like Charlie Brown's smile than the standard smile, it is clear that there is a smile in the implied volatilities: Options that are further in or out of the money appear to carry higher volatilities than slightly out-of-the-money options. The pattern for extreme values of ITM is more mixed.

Test 3: A Put-Call Parity Test

Another prediction of the Black-Scholes model is that put options and call options identical in all other respects should have the same implied volatilities and should trade at the same premium. This is a consequence of the arbitrage that enforces put-call parity. Recall that put-call parity implies

P_t + e^{-q(T-t)} S_t = C_t + X e^{-r(T-t)}.

A put and a call, having identical strike prices and terms, should have equal premiums if they are just at-the-money in a present value sense. If, as this paper does, we interpret at-the-money in current dollars rather than present value (that is, as S = X rather than S = X e^{-(r-q)(T-t)}), at-the-money puts should have a premium slightly below calls. Because an option's premium is a direct function of its volatility, the requirement that put premiums be no greater than call premiums for equivalent at-the-money options implies that implied volatilities for puts be no greater than for calls.

For each trading day in the 1992-94 period, the difference between implied volatilities for at-the-money puts and calls having the same expiration dates was computed, using the ±2.5 percent criterion used above. (12) Figure 5 shows this difference. While puts sometimes have implied volatility less than calls, the norm is for higher implied volatilities for puts. Thus, puts tend to trade "richer" than equivalent calls, and the Black-Scholes model does not pass this put-call parity test.
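The parity relationship, and the premium ordering it implies for at-the-money options, can be checked numerically against model premiums. A self-contained sketch with illustrative values:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_premium(S, X, tau, r, sigma, q=0.0, kind="call"):
    """Black-Scholes premium for a European call or put with dividend yield q."""
    d1 = (log(S / X) + (r - q + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    if kind == "call":
        return S * exp(-q * tau) * norm.cdf(d1) - X * exp(-r * tau) * norm.cdf(d2)
    return X * exp(-r * tau) * norm.cdf(-d2) - S * exp(-q * tau) * norm.cdf(-d1)

S, X, tau, r, sigma, q = 450.0, 450.0, 61 / 365, 0.05, 0.12, 0.025
call = bs_premium(S, X, tau, r, sigma, q, "call")
put = bs_premium(S, X, tau, r, sigma, q, "put")

# Parity: P + e^{-q tau} S should equal C + X e^{-r tau} for model premiums.
print(f"parity gap: {(put + exp(-q * tau) * S) - (call + X * exp(-r * tau)):.2e}")

# At-the-money in current dollars (S = X) with r > q, the call trades above the put.
print(f"call - put: {call - put:.4f}")
```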

Saturday, May 2, 2020

Capital Gains or Losses Tax

Question: Discuss the Capital Gains or Losses Tax.

Answer: Dave made certain sales in the 2015 financial year, and the task at hand is to calculate the capital gains or losses arising from them. He received $850,000 from the sale of his two-storey house in St Lucia, which was purchased for $70,000. This means he made a gain of $780,000 on this sale, before accounting for commission expenses. However, the proceeds Dave received from the sale of his residence at St Lucia are fully exempt from capital gains tax because, as per the ATO, proceeds from the sale of a personal home are fully exempt if the individual has lived in it for the duration of ownership and the property has not been generating any assessable income. The amount forfeited by a buyer is also eligible for exemption under capital gains, as it forms part of the capital proceeds from the disposal of that asset, which was exempt because it was the taxpayer's main residence. Hence even the forfeited amount will not attract any capital gains tax liability.

The painting is not eligible for exemption, as it was neither purchased for under $500 nor acquired before 20 September 1985, so the net capital gain is taxable. The net gain of $110,000 is therefore taxed under capital gains tax; this figure is the selling price of $125,000 less the acquisition price of $15,000. The taxable amount can be calculated using either of the two methods prescribed by the ATO, i.e. the indexation method or the discount method, whichever yields the lower value, subject to the conditions laid down for using a particular method. Since the asset has been held for more than a year and was acquired before September 1999, Dave can apply the indexation method. The indexation factor is given by the CPI (Consumer Price Index) for the quarter in which the sale was made divided by the CPI for the quarter in which the initial investment was made. CPI values were obtained from the ATO website, and the indexation factor, calculated to four decimal places, is 2.6952: the purchase was made in the third quarter of 1985, for which the CPI was 39.7, and the sale in the second quarter of 2015, for which the CPI was 107. The cost base is increased by this factor, so the capital gain would be $84,572, the indexed cost base being $40,428 ($15,000 multiplied by the indexation factor). But the discount method gives a better result: under it, the capital gain is discounted by 50%, so the gain would be $55,000 (the net gain of $110,000 discounted by 50%). Since the discount method yields the lower value, we use it and put the capital gain on the painting at $55,000.

The capital gain can be reduced only after all the capital losses for the income year have been applied. It is imperative to mention here that net losses from collectables can only be deducted from capital gains made from collectables, not from other capital gains. The capital loss on the boat, which was purchased in 2004, would be calculated using the "other" method, giving a loss of $50,000; this amount is obtained by subtracting the sale price of $60,000 from the acquisition cost of $110,000.
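A short sketch of the two computations for the painting, plus the boat loss, using the figures above (the helper names are hypothetical, not ATO terminology):

```python
def indexation_method(cost, proceeds, cpi_sale, cpi_purchase):
    """Capital gain with the cost base scaled up by the CPI indexation factor."""
    factor = round(cpi_sale / cpi_purchase, 4)  # 107 / 39.7 -> 2.6952
    return proceeds - cost * factor

def discount_method(cost, proceeds, discount=0.50):
    """Capital gain reduced by the CGT discount (50% for individuals)."""
    return (proceeds - cost) * (1 - discount)

painting_indexed = indexation_method(15_000, 125_000, cpi_sale=107.0, cpi_purchase=39.7)
painting_discounted = discount_method(15_000, 125_000)
boat_loss = 110_000 - 60_000  # "other" method: cost base less sale proceeds

print(f"painting, indexation: {painting_indexed:,.0f}")    # 84,572
print(f"painting, discount:   {painting_discounted:,.0f}")  # 55,000 -> lower, so used
```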
For the capital gains tax on the shares, the "other" method will be used. The cost base includes the purchase price of the shares as well as the brokerage and the stamp duty paid. The total cost is therefore $71,000, and since the shares were sold for $80,000, the capital gain is $9,000. It is explicitly mentioned that interest charges are not to be included in the cost base.

The net capital gain or loss is given by the total capital gains for the year less the capital losses for the year, further reduced by any discounts allowed. Hence the net capital gain from the sale of the painting and the shares is $64,000 (or $93,572, depending on the method used for the painting), and the net capital loss from the sale of the boat is $50,000, leaving Dave a net capital gain of $14,000 (or $43,572) for the current year. Since Dave has a net capital gain of $14,000, he can apply it against the net capital loss carried forward from the previous year, which amounts to $50,000. His net capital loss would then stand at $36,000, as this year's capital gain is deducted from the carried-forward loss. If Dave instead had a net capital loss, it would be added to the loss carried forward from last year, so his total carried-forward loss would be $50,000 plus the additional loss incurred this year. A capital loss cannot be used to offset other tax liabilities; it is carried forward and can be deducted against capital gains in the coming years.

To calculate a car fringe benefit, an employer must work out the taxable value of the benefit using either of the below-mentioned methods, as per the ATO:

* the statutory formula method, which uses the price paid in acquiring the car for valuation purposes;
* the operating cost method, which takes stock of the operating use of the car to arrive at the tax liability.

Even if a different method was used in the previous taxable year, the method to be used this year is whichever gives the lower taxable value. However, if the documentation required for the operating cost method (for example, log books) has not been kept, then the statutory formula method must be used. The operating cost method requires the company to maintain a log book that specifies the car's business and non-business use. Since this has not been maintained, the statutory method of valuation would be used to evaluate the taxable value of the fringe benefit arising from the use of the car.

Under the statutory formula method, the steps involved are estimating the cost of the car, establishing the statutory rate, and determining the number of days the car was available for private use. The taxable value is then given by A x B x C / 365, where A is the base value of the car, B is the statutory rate, and C is the number of days of use of the car in the given assessment year. The base value of a car comprises: the cost price of the car, which includes registration and duty charges; expenses incurred on standard accessories; and delivery expenses. The cost price of the car includes GST. The statutory rate for calculating the fringe benefits tax would be 20%, since the ATO prescribes a flat rate of 20% where the kilometres travelled are fewer than 15,000; in fact, for any benefit provided after 2011 the rate is a flat 20%. The car was held by Emma for an 11-month period, which constituted 336 days. During this 336-day period from 1 May to 31 March, no days would be deducted in determining the number of days of use: the ATO clearly lists annual maintenance days as days when the car is available for use, and days when it is garaged at the employee's house (in this case, while Emma was interstate) are likewise not deducted. Keeping the above factors in consideration, the taxable value is calculated at 20% of the cost price of the car, which is $33,000, factored by 336/365. Hence $6,075 is the taxable value, as the sketch below shows.
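A sketch of the statutory formula as applied above; the helper name is a hypothetical choice, and the figures are from the scenario:

```python
def car_fringe_benefit(base_value, days_available, statutory_rate=0.20):
    """Statutory formula method: taxable value = A x B x C / 365."""
    return base_value * statutory_rate * days_available / 365

value = car_fringe_benefit(base_value=33_000, days_available=336)
print(f"car fringe benefit taxable value: {value:,.2f}")  # 6,075.62, rounded to 6,075 in the text
```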
A company is said to provide a loan fringe benefit if it extends a loan to an employee and charges no interest or a low rate of interest. Any rate lower than the prescribed benchmark interest rate qualifies the loan as a fringe benefit. The benchmark interest rate for fringe benefits tax for the assessment year ending 31 March 2015 is given as 5.95% by the ATO. Hence, for the given scenario, since the loan is provided by Periwinkle to Emma at 4.5%, it qualifies as a loan fringe benefit. The taxable value of a loan fringe benefit is the difference between:

* the interest that would have accrued on the amount extended as a loan during the FBT year had the benchmark rate been applicable, and
* the interest which the company charges the employee at the reduced rate of interest.

Since Emma uses the loan for the purchase of a holiday home and for lending to her husband, the entire amount is taken into consideration. For the given scenario, the taxable value of the loan fringe benefit is the difference between $29,750 and $22,500, which is $7,250; the former is the interest payable at the 2015 benchmark rate, while the latter is the interest at the rate the company actually charged Emma. There is no specific information regarding a cheap sale of the company's own products to its employees, nor does this fall under any exempt category; but since the price Emma paid was in any case more than the manufacturing cost, we exclude it from the scope of the fringe benefits provided to Emma. Hence the total taxable value of the fringe benefits is the loan fringe benefit of $7,250 plus the car fringe benefit of $6,075, a total of $13,325, and the total fringe benefits tax liability would be $6,262.75, as the fringe benefits tax rate is 47%.

Had the $50,000 been used by Emma herself, instead of being lent to her husband to buy the shares, it would have been eligible for deduction. The ATO prescribes that the taxable value of a loan fringe benefit may be reduced in accordance with the "otherwise deductible" rule, subject to the constraint that the investment is made by the employee himself or herself rather than by an associate, which was the case in the first place. Putting it simply, the taxable value is reduced to the extent to which interest payable on the loan is, or would be, allowable as an income tax deduction to the employee. An example helps in understanding the implications: suppose an employee uses a loan from his or her company wholly to invest in interest-bearing financial instruments; then the interest payable to the company is fully deductible for tax purposes, and under this rule the taxable value of the fringe benefit provided would be zero, irrespective of the rate of interest charged by the company on the loan.
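A sketch of the loan fringe benefit arithmetic, including the effect of the otherwise deductible rule, with figures from the scenario:

```python
principal, benchmark, charged = 500_000, 0.0595, 0.045

# Taxable value: interest at the benchmark rate less the interest actually charged.
loan_benefit = principal * (benchmark - charged)         # 29,750 - 22,500 = 7,250
total = loan_benefit + 6_075                             # plus the car benefit (as rounded in the text)
print(f"total taxable value: {total:,.0f}")              # 13,325
print(f"FBT at 47%: {total * 0.47:,.2f}")                # 6,262.75

# Otherwise deductible variant: had Emma invested the 50,000 in her own name,
# the interest differential on that portion would reduce the taxable value.
reduced = loan_benefit - 50_000 * (benchmark - charged)  # 7,250 - 725 = 6,525
print(f"revised total: {reduced + 6_075:,.0f}")          # 12,600
```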
Therefore, where the otherwise deductible rule applies, the taxable value of a loan fringe benefit is: the interest accrued on the principal during the FBT year at the benchmark rate of interest, less the interest arising at the lowered rate of interest, less the otherwise deductible amount. Hence, for the given scenario, the taxable amount under the loan fringe benefit would be reduced by the differential interest paid on the $50,000, since this would now be deductible for tax purposes. This means the loan fringe benefit would be $6,525 instead of $7,250, and the total taxable amount under fringe benefits tax would be $12,600.

References

ATO, 2016. Capital gains tax. [Online] Available at: https://www.ato.gov.au/General/Capital-gains-tax/

ATO, 2016. Fringe benefits tax. [Online] Available at: https://www.ato.gov.au/General/Fringe-benefits-tax-(FBT)/