Thursday, December 26, 2019

The Crusades' Effects on the Middle East

Between 1095 and 1291, Christians from western Europe launched a series of eight major invasions against the Middle East. These attacks, called the Crusades, were aimed at liberating the Holy Land and Jerusalem from Muslim rule. The Crusades were sparked by religious fervor in Europe, by exhortations from various popes, and by the need to rid Europe of excess warriors left over from regional wars. What effect did these attacks, which came out of the blue from the perspective of Muslims and Jews in the Holy Land, have on the Middle East?

Short-Term Effects

In an immediate sense, the Crusades had a terrible effect on some of the Muslim and Jewish inhabitants of the Middle East. During the First Crusade, for example, adherents of the two religions joined together to defend the cities of Antioch (1097 CE) and Jerusalem (1099) from European Crusaders who laid siege to them. In both cases, the Christians sacked the cities and massacred the Muslim and Jewish defenders. It must have been horrifying for the people to see armed bands of religious zealots approaching to attack their cities and castles. However, as bloody as the battles could be, on the whole the people of the Middle East considered the Crusades more of an irritant than an existential threat.

A Global Trade Power

During the Middle Ages, the Islamic world was a global center of trade, culture, and learning. Arab Muslim traders dominated the rich trade in spices, silk, porcelain, and jewels that flowed into Europe from China, Indonesia, and India. Muslim scholars had preserved and translated the great works of science and medicine from classical Greece and Rome, combined them with insights from the ancient thinkers of India and China, and went on to invent or improve fields such as algebra and astronomy, along with medical innovations such as the hypodermic needle. Europe, on the other hand, was a war-torn region of small, feuding principalities, mired in superstition and illiteracy. One of the primary reasons that Pope Urban II initiated the First Crusade (1096-1099), in fact, was to distract the Christian rulers and nobles of Europe from fighting one another by creating a common enemy for them: the Muslims who controlled the Holy Land.

Europe's Christians would launch seven additional crusades over the next 200 years, but none was as successful as the First Crusade. One effect of the Crusades was the creation of a new hero for the Islamic world: Saladin, the Kurdish sultan of Syria and Egypt, who in 1187 freed Jerusalem from the Christians but refused to massacre them as the Christians had done to the city's Muslim and Jewish citizens 90 years previously. On the whole, the Crusades had little immediate effect on the Middle East in terms of territorial losses or psychological impact. By the 13th century, people in the region were much more concerned about a new threat: the quickly expanding Mongol Empire, which would bring down the Abbasid Caliphate, sack Baghdad, and push toward Egypt. Had the Mamluks not defeated the Mongols at the Battle of Ayn Jalut (1260), the entire Muslim world might have fallen.

Effects on Europe

In the centuries that followed, it was actually Europe that was most changed by the Crusades. The Crusaders brought back exotic new spices and fabrics, fueling European demand for products from Asia. They also brought back new ideas: medical knowledge, scientific ideas, and more enlightened attitudes about people of other religious backgrounds.
These changes among the nobility and soldiers of the Christian world helped spark the Renaissance and eventually set Europe, the backwater of the Old World, on a course toward global conquest.

Long-Term Effects of the Crusades on the Middle East

Eventually, it was Europe's rebirth and expansion that finally created a Crusader effect in the Middle East. As Europe asserted itself during the 15th through 19th centuries, it forced the Islamic world into a secondary position, sparking envy and reactionary conservatism in some sectors of the formerly more progressive Middle East. Today, the Crusades constitute a major grievance for some people in the Middle East when they consider relations with Europe and the West.

21st Century Crusade

In 2001, President George W. Bush reopened the almost 1,000-year-old wound in the days following the 9/11 attacks. On September 16, 2001, he said, "This crusade, this war on terrorism, is going to take a while." The reaction in the Middle East and Europe was sharp and immediate: commentators in both regions decried Bush's use of that term and vowed that the terrorist attacks and America's reaction would not turn into a new clash of civilizations like the medieval Crusades. The U.S. entered Afghanistan about a month after the 9/11 attacks to battle the Taliban and al-Qaeda, which was followed by years of fighting between U.S. and coalition forces and terror groups and insurgents in Afghanistan and elsewhere. In March 2003, the U.S. and other Western forces invaded Iraq over claims that President Saddam Hussein's military was in possession of weapons of mass destruction. Hussein was captured (and later hanged following a trial), al-Qaeda leader Osama bin Laden was killed in Pakistan during a U.S. raid, and other terror leaders have been taken into custody or killed. The U.S. maintains a strong presence in the Middle East to this day and, due in part to the civilian casualties that have occurred during the years of fighting, some have compared the situation to an extension of the Crusades.

Sources and Further Reading

Claster, Jill N. Sacred Violence: The European Crusades to the Middle East, 1095-1396. Toronto: University of Toronto Press, 2009.
Köhler, Michael. Alliances and Treaties between Frankish and Muslim Rulers in the Middle East: Cross-Cultural Diplomacy in the Period of the Crusades. Trans. Peter M. Holt. Leiden: Brill, 2013.
Holt, Peter M. The Age of the Crusades: The Near East from the Eleventh Century to 1517. London: Routledge, 2014.

Wednesday, December 18, 2019

The Population Problem Essay

The Population Problem

Two hundred years ago, Thomas Malthus, in An Essay on the Principle of Population, reached the conclusion that the number of people in the world will increase exponentially, while the ability to feed these people will only increase arithmetically (21). Current evidence shows that this theory may not be far from the truth. For example, between 1950 and 1984, the total amount of grain produced more than doubled, much more than the increase in population in those 34 years. More recently, though, these statistics have become reversed. From 1950 to 1984, the amount of grain increased at 3 percent annually; yet from 1984 to 1993, grain production grew at barely 1 percent per year, a decrease in grain production per…

…More people means more waste, more pollution, and more development. With this taken into consideration, it seems that Hardin's teachings should no longer fall on deaf ears. When discussing the issue of population, it is important to note that it is one of the most controversial issues facing the world today. Population growth, like many other environmental issues, has two sides. One side will claim that the population explosion is only a myth, while the other side will argue that the population explosion is reality. Because of this, statistics concerning this subject vary widely. But, in order to persuade, it is necessary to take one side or the other. Thus, statistics may be questioned as to their validity, even though the statistics come from credible sources.

Lifeboat Ethics

The United States is the third most populous country in the world, behind only China and India. Unlike China and India, though, the United States is the fastest-growing industrialized nation. The United States population expands so quickly because of the imbalance between immigration and emigration and between births and deaths. For example, in 1992, 4.1 million babies were born. Weighing this statistic against the number of deaths and the number of people who entered and left the country, the result was that the United States gained 2.8 million more people than it lost (Douglis 12). Population increases place great strain on American society and more particularly it…

Monday, December 9, 2019

Illustrative Transactions and Financial Statements Answers

6. Identify potential problems with regression data. 7. Evaluate the advantages and disadvantages of alternative cost estimation methods. 8. (Appendix A) Use Microsoft Excel to perform a regression analysis. 9. (Appendix B) Understand the mathematical relationship describing the learning phenomenon.

Why Estimate Costs?

Managers make decisions and need to compare costs and benefits among alternative actions. A good decision requires good information about costs; the better these estimates, the better the decisions managers will make (Lanen, 2008).

Key Question: What adds value to the firm? Good decisions. You saw in Chapters 3 and 4 that good decisions require good information about costs. Cost estimates are important elements in helping managers make decisions that add value to the company (Lanen, 2008).

Learning Objective One: Understand the reasons for estimating fixed and variable costs.

The basic idea in cost estimation is to estimate the relation between costs and the variables that affect them, the cost drivers. We focus on the relation between costs and one important variable that affects them: activity (Lanen, 2008).

Basic Cost Behavior Patterns

By now you understand the importance of cost behavior; it is the key distinction for decision making. Costs behave as either fixed or variable (Lanen, 2008). Fixed costs are fixed in total, while variable costs vary in total. On a per-unit basis, fixed costs vary inversely with activity and variable costs stay the same. The cost equation we use to estimate costs is:

Total costs = Fixed costs + (Variable cost per unit x Number of units), or TC = F + VX

|With a Change in Activity |In Total |Per Unit |
|Fixed cost |Fixed |Varies |
|Variable cost |Varies |Fixed |

What Methods Are Used to Estimate Cost Behavior?

Three general methods are commonly used in practice to estimate the relationship between cost behavior and activity levels: engineering estimates, account analysis, and statistical methods such as regression analysis (Lanen, 2008). Results are likely to differ from method to method. Consequently, it is a good idea to use more than one method so that the results can be compared. These methods, therefore, should be seen as ways to help management arrive at the best estimates possible; their strengths and weaknesses require attention.

Learning Objective Two: Estimate costs using engineering estimates.

Engineering Estimates

Cost estimates are based on measuring and then pricing the work involved in a task. This method is based on detailed plans and is frequently used for large projects or new products. It often omits inefficiencies, such as downtime for unscheduled maintenance, absenteeism, and other miscellaneous random events that affect the entire firm (Lanen, 2008). The first step is to identify the activities involved: labor, rent, insurance, time, and cost.

Advantages of engineering estimates: they detail each step required to perform an operation, permit comparison with other centers that have similar operations, and identify strengths and weaknesses.

Disadvantages of engineering estimates: they can be quite expensive to use.
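Each of these methods ultimately produces an estimate of F and V for the cost equation TC = F + VX introduced above. A minimal Python sketch of how such an estimate is used once it is in hand (the fixed and variable amounts here are illustrative placeholders, not 3C's figures):

    def total_cost(fixed_cost, variable_cost_per_unit, units):
        """Cost equation: TC = F + V * X."""
        return fixed_cost + variable_cost_per_unit * units

    # Illustrative estimate: $5,000 of fixed cost per period and $10 of variable cost per unit.
    for units in (200, 400, 600):
        print(units, "units ->", total_cost(5000, 10.0, units))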
Learning Objective Three: Estimate costs using account analysis.

Account Analysis

Estimating costs using account analysis involves reviewing each account making up the total costs being analyzed and identifying each cost as either fixed or variable, depending on the relation between the cost and some activity. Account analysis relies heavily on personal judgment. This method is often based on the previous period's costs alone and is subject to managers focusing on specific issues of the previous period, even though these might be unusual and infrequent (Lanen, 2008).

Example: Account Analysis (Exhibit 5.1)

3C Cost Estimation Using Account Analysis: Costs for 360 Repair-Hours
|Account |Total |Variable Cost |Fixed Cost |
|Office rent |$3,375 |$1,375 |$2,000 |
|Utilities |310 |100 |210 |
|Administration |3,386 |186 |3,200 |
|Supplies |2,276 |2,176 |100 |
|Training |666 |316 |350 |
|Other |613 |257 |356 |
|Total |$10,626 |$4,410 |$6,216 |
|Per repair-hour | |$12.25 ($4,410 / 360 repair-hours) | |

Costs at 360 repair-hours (a unit is a repair-hour):
Total costs = Fixed costs + (Variable cost per unit x Number of units)
$10,626 = $6,216 + $12.25 (360) = $6,216 + $4,410

Costs at 520 repair-hours:
TC = $6,216 + $12.25 (520) = $6,216 + $6,370 = $12,586

Advantage of account analysis: managers and accountants are familiar with company operations and the way costs react to changes in activity levels.

Disadvantages of account analysis: managers and accountants may be biased, and these decisions often have major economic consequences for them.

Learning Objective Four: Estimate costs using statistical analysis.

Statistical analysis deals with both random and unusual events by using several periods of operation, or several locations, as the basis for estimating cost relations. We can do this by applying statistical theory, which allows random events to be separated from the underlying relation between costs and activities. A statistical cost analysis analyzes costs within the relevant range. Do you remember how we defined the relevant range? It is the range of activity over which a cost estimate is valid, usually between the upper and lower limits of past activity levels for which data are available (Lanen, 2008).

Example: Overhead Costs for 3C (Exhibit 5.2)

The following information is used throughout this chapter: the overhead costs and repair-hours for 3C for the last 15 months.
|Month |Overhead Costs |Repair-Hours |Month |Overhead Costs |Repair-Hours |
|1 |$9,891 |248 |8 |$10,345 |344 |
|2 |$9,244 |248 |9 |$11,217 |448 |
|3 |$13,200 |480 |10 |$13,269 |544 |
|4 |$10,555 |284 |11 |$10,830 |340 |
|5 |$9,054 |200 |12 |$12,607 |412 |
|6 |$10,662 |380 |13 |$10,871 |384 |
|7 |$12,883 |568 |14 |$12,816 |404 |
| | | |15 |$8,464 |212 |

A. Scattergraph

A scattergraph is a plot of cost and activity levels; it gives us a visual representation of the costs. Does it look like a relationship exists between repair-hours and overhead costs? We use "eyeball judgment" to determine the intercept and slope of a line through the data points. Do you remember graphing total cost in Chapter 3? Where the total cost line intercepts the vertical (Y) axis represents fixed cost; in other words, the intercept equals fixed costs. The slope of the line represents the variable cost per unit. So we use eyeball judgment to estimate fixed cost and variable cost per unit and arrive at total cost for a given level of activity. As you can imagine, preparing an estimate on the basis of a scattergraph is subject to a high level of error. Consequently, scattergraphs are usually not used as the sole basis for cost estimates but to illustrate the relations between costs and activity and to point out any past data items that might be significantly out of line.
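A scattergraph like the one just described can be produced directly from the Exhibit 5.2 data. A minimal Python sketch (assuming the matplotlib plotting library is installed):

    import matplotlib.pyplot as plt

    # Exhibit 5.2: fifteen months of repair-hours and overhead costs for 3C.
    repair_hours = [248, 248, 480, 284, 200, 380, 568, 344, 448, 544, 340, 412, 384, 404, 212]
    overhead = [9891, 9244, 13200, 10555, 9054, 10662, 12883, 10345, 11217, 13269, 10830, 12607, 10871, 12816, 8464]

    plt.scatter(repair_hours, overhead)
    plt.xlabel("Repair-hours")
    plt.ylabel("Overhead cost ($)")
    plt.title("3C overhead cost versus repair-hours (Exhibit 5.2)")
    plt.show()

Eyeballing a straight line through these points gives rough estimates of the intercept (fixed cost) and the slope (variable cost per repair-hour); the high-low and regression methods below make those estimates precise.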
B. High-Low Cost Estimation

The high-low method estimates costs based on two cost observations, usually at the highest and lowest activity levels. Although it allows a computation of estimates of the fixed and variable costs, it ignores most of the information available to the analyst because it uses only two data points (Lanen, 2008). The equations are:

V = (Cost at highest activity level - Cost at lowest activity level) / (Highest activity - Lowest activity)
F = Total cost at highest activity level - V x (Highest activity), or
F = Total cost at lowest activity level - V x (Lowest activity)

Putting in the 3C numbers:
V = ($12,883 - $9,054) / (568 - 200) = $10.40 per repair-hour
F = $12,883 - $10.40 (568) = $6,976, or
F = $9,054 - $10.40 (200) = $6,974 (the small difference is due to rounding)

C. Statistical Cost Estimation Using Regression Analysis

Regression analysis is a statistical procedure for determining the relationship between variables. Whereas the high-low method uses only two data points, regression uses all of the data points to estimate costs.

Regression statistically measures the relationship between two variables, activity and costs. Regression techniques are designed to generate the line that best fits a set of data points. In addition, they generate information that helps a manager determine how well the estimated regression equation describes the relations between costs and activities (Lanen, 2008). We recommend that users of regression (1) fully understand the method and its limitations, (2) specify the model, that is, the hypothesized relation between costs and cost predictors, (3) know the characteristics of the data being tested, and (4) examine a plot of the data.

For 3C, repair-hours are the activity, the independent (or predictor) variable. In regression, the independent variable is identified as the X term, and overhead cost is the dependent variable, or Y term. What we are saying is that overhead costs are dependent on, or predicted by, repair-hours.

The Regression Equation

Y = a + bX, that is, Y = Intercept + (Slope) X, or Overhead = Fixed costs + (V) Repair-hours

You already know that an estimate of the costs at any given activity level can be computed using the equation TC = F + VX. The regression equation, Y = a + bX, represents the same cost equation: Y equals the intercept plus the slope times the number of units. When estimating overhead costs for 3C, total overhead cost equals fixed costs plus the variable cost per repair-hour times the number of repair-hours. We leave the computational details and theory to computer science and statistics courses and focus on the use and interpretation of regression estimates. We describe the steps required to obtain regression estimates using Microsoft Excel in Appendix A to this chapter.
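Both calculations can be reproduced on the Exhibit 5.2 data in a few lines of Python (using numpy). The high-low figures match the hand calculation above except for small rounding differences, and the least-squares line previews the regression output interpreted next:

    import numpy as np

    repair_hours = np.array([248, 248, 480, 284, 200, 380, 568, 344, 448, 544, 340, 412, 384, 404, 212])
    overhead = np.array([9891, 9244, 13200, 10555, 9054, 10662, 12883, 10345, 11217, 13269, 10830, 12607, 10871, 12816, 8464])

    # High-low method: use only the highest- and lowest-activity months.
    hi, lo = repair_hours.argmax(), repair_hours.argmin()
    v_hl = (overhead[hi] - overhead[lo]) / (repair_hours[hi] - repair_hours[lo])
    f_hl = overhead[hi] - v_hl * repair_hours[hi]

    # Simple regression (ordinary least squares): use all fifteen observations.
    v_ols, f_ols = np.polyfit(repair_hours, overhead, 1)  # slope, then intercept

    print(f"High-low:   F = {f_hl:,.0f}, V = {v_hl:.2f} per repair-hour")
    print(f"Regression: F = {f_ols:,.0f}, V = {v_ols:.2f} per repair-hour")
    print("Estimate at 520 repair-hours (high-low):  ", round(f_hl + v_hl * 520))
    print("Estimate at 520 repair-hours (regression):", round(f_ols + v_ols * 520))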
Learning Objective Five: Interpret the results of regression output.

Interpreting Regression

Interpreting the regression output allows us to estimate total overhead costs. The intercept, 6,472, is the estimate of total fixed costs, and the coefficient on repair-hours, 12.52, is the estimate of variable cost per repair-hour.

Correlation coefficient (R): measures the linear relationship between the variables. The closer R is to 1.0, the closer the points are to the regression line; the closer R is to zero, the poorer the fit of the regression line (Lanen, 2008).

Coefficient of determination (R squared): the square of the correlation coefficient; the proportion of the variation in the dependent variable (Y) explained by the independent variable(s) (X).

t-statistic: the value of the estimated coefficient, b, divided by its standard error. Generally, if it is over 2, the coefficient is considered significant; if significant, the cost is not totally fixed. The significance level of the t-statistic is called the p-value.

Continuing to interpret the regression output: Multiple R is the correlation coefficient and measures the linear relationship between the independent and dependent variables. R Square, the square of the correlation coefficient, identifies the proportion of the variation in the dependent variable (overhead costs) that is explained by the independent variable (repair-hours). A Multiple R of 0.91 tells us that a linear relationship does exist between repair-hours and overhead costs, and an R Square of 0.828 tells us that 82.8% of the changes in overhead costs can be explained by changes in repair-hours. Can you use this regression output to estimate overhead costs for 3C at 520 repair-hours?

Multiple Regression

Multiple regression is used when more than one predictor (X) is needed to adequately predict the dependent variable (Lanen, 2008). For example, 3C might obtain more precise results by using both repair-hours and the cost of parts to predict total cost.

Predictors: X1 = repair-hours; X2 = parts cost.

3C Cost Information
|Month |Overhead Costs |Repair-Hours (X1) |Parts Cost (X2) |
|1 |$9,891 |248 |$1,065 |
|2 |$9,244 |248 |$1,452 |
|3 |$13,200 |480 |$3,500 |
|4 |$10,555 |284 |$1,568 |
|5 |$9,054 |200 |$1,544 |
|6 |$10,662 |380 |$1,222 |
|7 |$12,883 |568 |$2,986 |
|8 |$10,345 |344 |$1,841 |
|9 |$11,217 |448 |$1,654 |
|10 |$13,269 |544 |$2,100 |
|11 |$10,830 |340 |$1,245 |
|12 |$12,607 |412 |$2,700 |
|13 |$10,871 |384 |$2,200 |
|14 |$12,816 |404 |$3,110 |
|15 |$8,464 |212 |$752 |

In multiple regression, the Adjusted R Square is the correlation coefficient squared, adjusted for the number of independent variables used to make the estimate. Reading this output tells us that 89% of the changes in overhead costs can be explained by changes in repair-hours and parts cost; recall that only 82.8% of the changes were explained when repair-hours was the sole independent variable. Can you use this regression output to estimate overhead costs at 520 repair-hours and $3,500 of parts cost?
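One way to answer that question is to fit the two-predictor model directly. A sketch using numpy's least-squares solver (the coefficients printed here are estimates and may differ slightly from the Excel output, but the resulting cost estimate should land close to the multiple-regression figure reported in the comparison below):

    import numpy as np

    repair_hours = np.array([248, 248, 480, 284, 200, 380, 568, 344, 448, 544, 340, 412, 384, 404, 212])
    parts_cost = np.array([1065, 1452, 3500, 1568, 1544, 1222, 2986, 1841, 1654, 2100, 1245, 2700, 2200, 3110, 752])
    overhead = np.array([9891, 9244, 13200, 10555, 9054, 10662, 12883, 10345, 11217, 13269, 10830, 12607, 10871, 12816, 8464])

    # Design matrix: a column of ones for the intercept plus the two predictors.
    X = np.column_stack([np.ones(len(overhead)), repair_hours, parts_cost])
    (intercept, b_hours, b_parts), *_ = np.linalg.lstsq(X, overhead, rcond=None)

    print(f"Overhead = {intercept:,.0f} + {b_hours:.2f} x repair-hours + {b_parts:.3f} x parts cost")
    print("Estimate at 520 repair-hours and $3,500 of parts:",
          round(intercept + b_hours * 520 + b_parts * 3500))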
Learning Objective Six: Identify potential problems with regression data.

Implementation Problems

It is easy to be overconfident when interpreting regression output; it all looks so official. But beware of some potential problems with regression data. We already discussed in earlier chapters that costs can be curvilinear and that cost estimates are only valid within the relevant range. Data may also include outliers, and relationships may be spurious. The four problem areas are curvilinear costs, outliers, spurious relations, and assumptions.

1. Curvilinear costs. Problem: attempting to fit a linear model to nonlinear data, which is likely to occur near full capacity. Solution: define a more limited relevant range (for example, from 25 to 75 percent of capacity) or design a nonlinear model. If the cost function is curvilinear, a linear model has weaknesses: the linear cost estimate understates the slope of the cost line in the ranges close to capacity (see Exhibit 5.5).

2. Outliers. Problem: an outlier moves the regression line. Solution: prepare a scattergraph, analyze the graph, and eliminate highly unusual observations before running the regression. Because regression calculates the line that best fits the data points, observations that lie a significant distance away from the line can have an overwhelming effect on the regression estimate; a single significant outlier can pull the computed regression line a substantial distance from most of the points (see Exhibit 5.6).

3. Spurious relations. Problem: using the wrong variables in the regression, for example, using direct labor to explain materials costs. Although the association is very high, both are actually driven by output. Solution: carefully analyze each variable and determine the relationship among all elements before using them in the regression.

4. Assumptions. Problem: if the assumptions underlying the regression are not satisfied, the regression is not reliable. Solution: there is no clear solution. Limiting the time period helps ensure that cost behavior remains constant, yet this weakens the model because less data is used.

Learning Objective Seven: Evaluate the advantages and disadvantages of alternative cost estimation methods.

Statistical Cost Estimation

Advantages: reliance on historical data is relatively inexpensive, and computational tools allow more data to be used than with non-statistical methods.

Disadvantages: historical data may be the only readily available, cost-effective basis for estimating costs, and analysts must be alert to cost-activity changes.

Choosing an Estimation Method

Each cost estimation method can yield a different estimate of the costs that are likely to result from a particular management decision. This underscores the advantage of using more than one method to arrive at a final estimate. Which method is best? Management must weigh the cost and benefit of each method (Lanen, 2008). Estimated manufacturing overhead with 520 repair-hours and $3,500 of parts cost:

|Account Analysis = $12,586 |High-Low = $12,384 |Regression = $12,982 |Multiple Regression = $13,588* |

*The more sophisticated methods yield more accurate cost estimates than the simple methods.

Data Problems

No matter what method is used to estimate costs, the results are only as good as the data used. Collecting appropriate data is complicated by missing data, outliers, allocated and discretionary costs, inflation, and mismatched time periods.
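The pull that a single unusual observation exerts on the fitted line (the outlier problem described above) is easy to demonstrate by refitting the 3C regression after distorting one month. The distorted value below is invented purely for illustration:

    import numpy as np

    repair_hours = np.array([248, 248, 480, 284, 200, 380, 568, 344, 448, 544, 340, 412, 384, 404, 212])
    overhead = np.array([9891, 9244, 13200, 10555, 9054, 10662, 12883, 10345, 11217, 13269, 10830, 12607, 10871, 12816, 8464])

    def fit(x, y):
        slope, intercept = np.polyfit(x, y, 1)
        return intercept, slope

    print("All 15 months as recorded: F = %.0f, V = %.2f" % fit(repair_hours, overhead))

    # Pretend month 7 was mis-recorded (a hypothetical data-entry error, not a real 3C figure).
    distorted = overhead.copy()
    distorted[6] = 25000
    print("With one distorted month:  F = %.0f, V = %.2f" % fit(repair_hours, distorted))

A scattergraph makes such a point stand out immediately, which is one more reason to plot the data before running the regression.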
Learning Objective Eight: (Appendix A) Use Microsoft Excel to perform a regression analysis.

Appendix A: Microsoft Excel as a Tool

Many software programs exist to aid in performing regression analysis. In order to use Microsoft Excel for regression, the Analysis ToolPak add-in must be installed. Such packages allow users to generate a regression analysis easily, but the analyst must be well schooled in regression in order to determine the meaning of the output.

Learning Objective Nine: (Appendix B) Understand the mathematical relationship describing the learning phenomenon.

Appendix B: The Learning Phenomenon

The learning phenomenon refers to the systematic relationship between the amount of experience in performing a task and the time required to perform it. It means that variable costs tend to decrease per unit as volume increases. Example, with an 80 percent learning rate:

|Unit |Time to Produce |Calculation of Time |
|First unit |100 hours |(assumed) |
|Second unit |80 hours |(80 percent x 100 hours) |
|Fourth unit |64 hours |(80 percent x 80 hours) |
|Eighth unit |51.2 hours |(80 percent x 64 hours) |

In general, each time cumulative output doubles, the time for the latest unit falls to 80 percent of the time at the previous doubling, so the time for the nth unit is 100 x n^(log2 0.8) hours. Impact: the learning phenomenon causes the unit cost to decrease as production increases, which implies a nonlinear cost model. As workers become more skilled, they are able to produce more output per hour, which changes the shape of the total cost curve and leads to a lower per-unit cost at higher output.

Chapter 5: END

Course work: Exercise 5-25, parts A and B; Problem 5-47, parts A and B.

Reference

Lanen, W. N., Anderson, S. W., & Maher, M. W. (2008). Fundamentals of cost accounting. New York: McGraw-Hill Irwin.

Monday, December 2, 2019

The PC of the future: Major developments in hardware and software

Today's computers have only two more generations left in which they can keep becoming smaller and more powerful at the same time, the two generations that current technologies for miniaturizing their basic circuits are calculated to allow. The prospect of not being able to maintain this trend does not please physicists and computer technologists at all, which is why, backed by the big companies of the sector, they are looking for completely new approaches for the computers of the future. None of these approaches looks simple, but all are suggestive, although it is still premature to try to imagine one of these molecular, quantum, or DNA computers.

Whoever buys a computer nowadays knows that it will be obsolete in a couple of years. We now take for granted the inexorable increase in the power of computers. But that cannot continue forever, at least not if computers remain based on present technologies. Gordon Moore, cofounder of Intel and one of the gurus of information technology, anticipates that existing miniaturization methods will offer only two more generations of computers before their capacity is exhausted. In 1965, Moore made a prediction that was confirmed with amazing precision over the three following decades: the power of computers would double every 18 months. This increase has been due mainly to the ever-smaller size of electronic components, so that more and more of them can be fitted onto a microprocessor, or chip. A modern chip of only half a square centimeter contains many millions of tiny electronic components such as transistors, each measuring less than a micron in diameter, roughly a hundredth of the thickness of a human hair.

These components are made basically of silicon, which conducts electricity, and of silicon dioxide, which is an insulator. To record circuit patterns in silicon microprocessors, a technique called photolithography is currently used, in which a polymer film carrying the layout of the circuitry is formed on the layers of silicon or silicon dioxide. The circuit pattern is recorded in the polymer film by exposing it to light through a mask. Etching chemicals are then applied that corrode the unprotected silicon material.

Limitation

The size of the elements that can be created by this procedure is limited by the wavelength of the light used to fix the pattern. At the moment, they can be made as small as one-fifth of a micron. But to create even smaller electronic components, down to one-tenth of a micron in diameter, microprocessor manufacturers will need to settle on radiation of a shorter wavelength: shorter-wavelength ultraviolet light, X-rays, or high-energy electron beams.
The computing giants have not yet agreed on which option to choose, but in any case the costs of developing the new technology and then changing the production process will be enormous. IBM, Motorola, Lucent Technologies, and Lockheed Martin have been driven to collaborate on the development of X-ray lithography. But miniaturization is not limited only by photolithography. Even if methods can be devised to make transistors and other devices smaller still, will they keep working effectively? Moore's law anticipates that by 2002 the smallest element of a silicon transistor, the gate insulator, will be only 4 or 5 atoms across. Will such a thin layer continue to provide the necessary insulation? This question was recently investigated by the physicist David Miller and his colleagues at Lucent Technologies. They used advanced fabrication techniques to produce a silicon dioxide film five atoms thick, which they sandwiched between two silicon layers. By comparison, commercial microprocessors have insulators about 25 atoms thick. Miller and his colleagues discovered that their ultra-thin insulating oxide was no longer able to isolate the silicon layers. The researchers calculated that an insulator less than four atoms thick would leak so much that it would be useless. In fact, because of the difficulty of making perfectly smooth, even films, insulators of twice that thickness would begin to break down if they were made with present methods. Therefore, conventional silicon transistors will have reached their minimum working dimensions in only a decade or so. Many computing technologists say that, for the moment, silicon is what there is; but what there is may run out soon.

On the other hand, to try to imagine the computer of the future is to risk sounding as absurd as the science fiction of the fifties. Nevertheless, judging by the current dreams of the technologists, we will be able to do without the plastic boxes and the silicon chips. Some say that computers will look more like organisms, their wires and switches made up of individual organic molecules. Others talk about computing in a bucket of water sprinkled with strands of DNA, the genetic material of cells, or enriched with molecules that manipulate data in response to radio-wave vibrations. One thing seems certain: for computers to keep growing in power, their components, the basic elements of the logic circuits, will have to be incredibly tiny. If the present trend toward miniaturization persists, these components will reach the size of individual molecules in less than a couple of decades, as we have seen. Scientists are already examining molecules called carbon nanotubes for use as wires of molecular size that could connect conventional solid-state silicon components. Carbon nanotubes can measure only a few millionths of a millimeter, that is, a few nanometers, less than a tenth of the diameter of the smallest wires that can be etched into commercial silicon chips. They are hollow tubes of pure carbon, extremely strong, with the added attraction that some of them conduct electricity.
Scientists at Stanford University in California have grown nanotubes from methane gas that connect the two terminals of electronic components. But wiring is the easy part. Can molecules process binary information? That is, can they combine sequences of bits (the ones and zeros encoded as electrical pulses in present-day computers) the way the logic gates built from transistors and other devices in silicon chips do? In a logic operation, certain combinations of ones and zeros in the input signals generate other combinations in the output signals; in this way data are compared, sorted, added, multiplied, or otherwise manipulated. Individual molecules have carried out some logic operations, with the bits encoded not as electrical pulses but as pulses of light or other molecular components. For example, a molecule might release a photon, a particle of light, if it received both a charged metal atom and a photon of a different color, but not if it received only one of the two. Nevertheless, nobody has a real idea of how to connect such molecules into a reliable, complex circuit that could actually compute: an authentic molecular computer. Some detractors say that molecular computing will never be viable.

Calculations with DNA

At the beginning of the nineties, Leonard Adleman, of the University of Southern California, proposed a different way to use molecules to calculate, pointing out that the cell's own database, DNA, can be used to solve computational problems. Adleman realized that DNA, basically a chain of four different molecular components, or bases, that act as a four-letter code for genetic information, looks remarkably like the universal computer postulated in the thirties by the mathematical genius Alan Turing, which stores binary information on a tape. Different sequences of bases can be programmed at will into synthetic DNA strands using the techniques of modern biotechnology, and these strands can then be generated, cut, and assembled in enormous quantities. Could these methods be used to persuade DNA to calculate like a Turing machine? Adleman saw that the DNA system could be especially apt for solving minimization problems, such as finding the shortest route connecting several cities. This kind of problem is among the hardest for conventional computers, since the number of possible routes increases very quickly as more cities are included, and a conventional computer takes a long time to examine all those options. But if each possible solution is encoded in a DNA strand, the problem does not seem so terrible, because even a single drop of DNA contains many trillions of molecules. All that is needed is to separate out the DNA strands that encode the best solution, which can be done using biotechnological methods that recognize specific short sequences of bases in a DNA strand. This procedure is nothing more than a slightly unorthodox way of finding a solution: first find all the possible solutions, and then use logic operations to choose the correct one. But because everything happens in parallel, with all the possible solutions created and examined at the same time, the process can be very fast. Computing with DNA has been demonstrated in principle, but it has not yet been shown to solve problems that a conventional computer cannot.
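The appeal of that massive parallelism becomes clearer when you count how quickly exhaustive route search grows on a conventional, one-candidate-at-a-time machine. A small Python illustration (the city counts are arbitrary):

    import math

    # Number of distinct orderings of n cities that a sequential search would have to examine.
    for n in (5, 10, 15, 20):
        print(f"{n} cities -> {math.factorial(n):,} possible routes")

At 20 cities there are already more than two quintillion orderings, which is why the idea of generating and filtering all the candidate strands at once in a test tube looked so attractive.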
It seems better suited to a fairly specific set of problems, such as minimization and cryptographic problems, than as a method of calculation for questions of every kind.

The quantum world

As early as the sixties, some computer scientists noticed where miniaturization was taking them: toward the quantum realm, where the counterintuitive rules of quantum mechanics govern the behavior of matter. As conventional circuit devices become smaller, quantum effects become a more and more important aspect of their behavior. Could it be feasible, they asked, to turn this potential complication into an advantage? The suggestion bore fruit in the eighties, when physicists began to look closely at how a computer might operate under the influence of quantum mechanics. What they discovered was that it could gain enormously in speed.

The crucial difference between processing information in the quantum world and in the classical one is that the former is not black and white. In a classical computer, every bit of information is one thing or the other: a 1 or a 0. But a quantum bit, or qubit, can be a mixture of both. Quantum objects can exist in a superposition of classically exclusive states, like Schrödinger's famous cat, which is neither alive nor dead but in a superposition of the two. This means that a series of quantum switches, objects in well-defined quantum states such as atoms in different states of excitation, has far more configurations available than the corresponding classical series of bits. For example, whereas a classical three-bit memory can store only one of the eight possible configurations of ones and zeros, the corresponding quantum register can hold all eight in a superposition of states. This multiplicity of states gives quantum computers much more power and, therefore, much more speed than their classical counterparts.

In practice, however, turning these ideas into a physical device poses an extraordinary challenge. A quantum superposition of states is a very delicate thing and difficult to maintain, especially if it extends across an enormous set of logic elements. Once the superposition begins to interact with its surroundings, it starts to collapse and the quantum information leaks away into the environment. Some researchers think this problem will make large-scale quantum computing, in which great amounts of data are manipulated in a multitude of steps, impossibly delicate and difficult to handle. But the problem has been eased in recent years by the development of algorithms that will allow quantum computers to keep working in spite of the small errors introduced by this kind of loss.
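The difference in bookkeeping is easy to see in code: a classical three-bit register holds exactly one of its eight configurations, while describing three qubits takes a whole vector of 2^3 = 8 amplitudes. A minimal numpy sketch (a classical simulation of the state vector, not real quantum hardware):

    import numpy as np

    n = 3
    # A classical 3-bit register stores exactly one of the 2**3 = 8 configurations.
    classical_register = 0b101
    print("classical register holds one value:", format(classical_register, f"0{n}b"))

    # A 3-qubit register is described by 2**3 complex amplitudes, one per configuration.
    # The uniform superposition gives equal weight to all eight at once.
    state = np.ones(2**n, dtype=complex) / np.sqrt(2**n)
    for index, amplitude in enumerate(state):
        print(f"|{index:0{n}b}>  amplitude {amplitude.real:.4f}")
    print("probabilities sum to", round(np.sum(np.abs(state) ** 2), 6))

Simulating n qubits this way requires keeping track of 2^n amplitudes, which is precisely why classical machines struggle to imitate quantum ones and why a real quantum device could pull ahead.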
MAJOR DEVELOPMENTS IN THE SOFTWARE

Introduction

Software engineering is not a 100% exact science. Every algorithm is shaped by the logical, political, and personal surroundings of its programmer. To talk about the future of software, we have to know a few historical facts. After that, we will have to choose our side in the software wars, between those who defend the open-source policy and those who defend the closed-source policy.

The software wars

The Internet would not exist without free software. As early as the sixties, Bell Labs was handing out the source code of its newly invented operating system, UNIX, and from that time up to the latest version of the Linux kernel, the history of software has been built on the exchange of information. The fundamental base of the network-society revolution is the exchange that the Open Source movement is constructing. Free software is one field of information and communication technology that certainly has no problem of decline; it is a movement that keeps growing and that has made extraordinary advances in recent years. The statistics are usually eloquent: last year, 50 percent of software developers had already considered migrating their projects to open source. Powerful applications such as Sun's StarOffice suite or Real Networks' streaming-server technology have acted as a locomotive for many other, less well-known applications that are also moving toward free development of their code.

The force behind this revolution in computing and telecommunications is represented by values and a philosophy unknown until now: the force of community and of group work toward tasks and objectives that acquire a special value for the developers themselves, who are compensated in non-monetary ways that were unsuspected in a time when the Protestant work ethic had already prevailed throughout the Western world. Students of these technologies and their implications, such as M. Castells, R. Stallman, P. Himanen, L. Torvalds, and Jesus G. Barahona, speak to us constantly of the possibilities open to homo digitalis to reach more knowledge in the future thanks to the adoption of policies in line with the founders of this movement, which is based on sharing code and knowledge for the mutual good.

The movement represented by the Free Software Foundation goes beyond the mere choice of development policies for new information and communication technologies. Betting on open-source development, the adoption of standards, and the support of free operating systems invests in the knowledge of the members of the digital society, not merely in the consumption of computing through immediate access to its use. In a digital society, use is as important as knowledge of the tools and of their development, since that is precisely what gives power to citizens and organizations. With the adoption of computing policies based on free software, knowledge of networks and code also passes to the users, who can then play a fundamental role as non-passive actors in the digital revolution.

But everything this movement represents is incompatible with the policies of the great company that today exerts worldwide control over computing: Microsoft. The company from the state of Washington is capturing most of the souls of the world's Internet users through its office software and workstation operating systems. That much is undeniable, and so should be, for governments, the systems by which these companies extract data from users to create profiles and databases that will end up who knows where. Given that the Redmond company belongs to a nation that prohibits its own subjects from using secure 1,024-bit encryption, how are the users of the planet supposed to trust the security policies they apparently intend to sell us?
And that is so even though the U.S. Department of Defense itself trusts open source and its encryption systems, and uses them. By contrast, anyone who has already tried Microsoft's XP version knows what it means to have a user's data and the computer's MAC number monitored over the network. And yet few people are left fighting against those policies. Networks of hacker labs and other groups devoted to teaching free software tools, grounded in the knowledge needed to maintain servers, publish without censorship, develop programs, give computing courses, and so on, are an alternative that is already bearing fruit. The gurus of the digital era have full names tied to these movements at some stage of their lives. The father of this whole way of thinking is Richard Stallman, and its best-known face is Linus Torvalds, who not long ago was given the prize for the best European industrialist. Linus and Richard are the key pieces in this revolution based on freedom, networked group work, and pure satisfaction in work well done. The competition that certain companies can exert will count for little against this movement, which is essential for server-side technologies; they will have to join it, as IBM and Sun have done as they begin to understand its potential and profit from it.

To use a simile, imagine that the worldwide community of doctors and medical researchers worked as a network, sharing their knowledge at every moment of the day, and received in return the immediate solution to every problem they encountered. With such a system, many of today's diseases would long since have been eradicated. Moreover, in this example, professionals on large salaries tied to policies of maximum secrecy in research laboratories would have little to say against the growing force of a movement built on the Net, whose knowledge has sharing as its base. Those who know the difference that free languages and development systems such as PHP, Zope, Perl, and the like make know very well how far free code can go. Those who know only proprietary, closed technologies, however, will hardly be able to look toward the future, because they are headed for technological slavery.

Conclusion

Computer science is a complex science, but one that people create. Many people do not realize that it is a science with two distinct but interdependent branches: the architecture of the computer and the software needed to use it, both of which are very important. Its possible uses are so many that, as in medicine, specialists are needed for each of its parts. Back in 1947, when the transistor was invented, and in 1804, when Jacquard designed a loom that performed predefined tasks by feeding punched cards into a reading contraption, nobody imagined how quickly we would arrive at today's supercomputers.