Monday, December 9, 2019

Illustrative Transactions and Financial Statements Answers

6. Identify potential problems with regression data.
7. Evaluate the advantages and disadvantages of alternative cost estimates.
8. (Appendix A) Use Microsoft Excel to perform a regression analysis.
9. (Appendix B) Understand the mathematical relationship describing the learning phenomenon.

Why Estimate Costs? Managers make decisions and need to compare costs and benefits among alternative actions. Good decisions require good information about costs; the better these estimates, the better the decisions managers will make (Lanen, 2008).

Key Question: What adds value to the firm? Good decisions. You saw in Chapters 3 and 4 that good decisions require good information about costs. Cost estimates are important elements in helping managers make decisions that add value to the company (Lanen, 2008).

Learning Objective One: Understand the reasons for estimating fixed and variable costs.

The basic idea in cost estimation is to estimate the relation between costs and the variables affecting costs, the cost drivers. We focus on the relation between costs and one important variable that affects them: activity (Lanen, 2008).

Basic Cost Behavior Patterns. By now you understand the importance of cost behavior: it is the key distinction for decision making. Costs behave as either fixed or variable (Lanen, 2008). Fixed costs are fixed in total; variable costs vary in total. On a per-unit basis, fixed costs vary inversely with activity and variable costs stay the same.

The formula we use to estimate costs is the cost equation:

Total costs = Fixed costs + (Variable cost per unit × Number of units)
TC = F + VX

|With a change in activity |In Total |Per Unit |
|Fixed costs               |Fixed    |Vary     |
|Variable costs            |Vary     |Fixed    |

What Methods Are Used to Estimate Cost Behavior?
Three general methods are commonly used in practice to estimate the relationship between cost behavior and activity levels: engineering estimates, account analysis, and statistical methods such as regression analysis (Lanen, 2008). Results are likely to differ from method to method. Consequently, it is a good idea to use more than one method so that results can be compared. These methods, therefore, should be seen as ways to help management arrive at the best estimates possible; their strengths and weaknesses require attention.

Learning Objective Two: Estimate costs using engineering estimates.

Engineering Estimates. Cost estimates are based on measuring and then pricing the work involved in a task. This method is based on detailed plans and is frequently used for large projects or new products. It often omits inefficiencies, such as downtime for unscheduled maintenance, absenteeism, and other miscellaneous random events that affect the entire firm (Lanen, 2008). The first step is to identify the activities involved (labor, rent, insurance) and the time and cost each requires.

Advantages of engineering estimates:
1. Details each step required to perform an operation.
2. Permits comparison of other centers with similar operations.
3. Identifies strengths and weaknesses.

Disadvantages of engineering estimates:
1. Can be quite expensive to use.

Learning Objective Three: Estimate costs using account analysis.

Account Analysis. Estimating costs using account analysis involves a review of each account making up the total costs being analyzed, identifying each cost as either fixed or variable depending on the relation between the cost and some activity. Account analysis relies heavily on personal judgment. This method is often based on last period's costs alone and is subject to managers focusing on specific issues of the previous period, even though these might be unusual and infrequent (Lanen, 2008).

Example: Account Analysis (Exhibit 5.1)

3C Cost Estimation Using Account Analysis (costs for 360 repair-hours; a unit is a repair-hour)

|Account        |Total   |Variable Cost |Fixed Cost |
|Office rent    |$3,375  |$1,375        |$2,000     |
|Utilities      |310     |100           |210        |
|Administration |3,386   |186           |3,200      |
|Supplies       |2,276   |2,176         |100        |
|Training       |666     |316           |350        |
|Other          |613     |257           |356        |
|Total          |$10,626 |$4,410        |$6,216     |

Variable cost per repair-hour: $12.25 ($4,410 ÷ 360 repair-hours)

Total costs = Fixed costs + (Variable cost per unit × Number of units)
TC = F + VX
$10,626 = $6,216 + $12.25 × 360 = $6,216 + $4,410

Costs at 520 repair-hours:
TC = $6,216 + $12.25 × 520 = $6,216 + $6,370 = $12,586

Advantage of account analysis:
1. Managers and accountants are familiar with company operations and the way costs react to changes in activity levels.

Disadvantages of account analysis:
1. Managers and accountants may be biased.
2. Decisions often have major economic consequences for managers and accountants.

Learning Objective Four: Estimate costs using statistical analysis.

One way to deal with both random and unusual events is to use several periods of operation, or several locations, as the basis for estimating cost relations. We can do this by applying statistical theory, which allows random events to be separated from the underlying relation between costs and activities. A statistical cost analysis analyzes costs within the relevant range. Do you remember how we defined relevant range? A relevant range is the range of activity where a cost estimate is valid. The relevant range for cost estimation is usually between the upper and lower limits of past activity levels for which data are available (Lanen, 2008).

Example: Overhead Costs for 3C (Exhibit 5.2)

The following information is used throughout this chapter: the overhead cost data for 3C for the last 15 months. Let's use this data to estimate costs using statistical analysis.

|Month |Overhead Costs |Repair-Hours |Month |Overhead Costs |Repair-Hours |
|1     |$9,891         |248          |8     |$10,345        |344          |
|2     |$9,244         |248          |9     |$11,217        |448          |
|3     |$13,200        |480          |10    |$13,269        |544          |
|4     |$10,555        |284          |11    |$10,830        |340          |
|5     |$9,054         |200          |12    |$12,607        |412          |
|6     |$10,662        |380          |13    |$10,871        |384          |
|7     |$12,883        |568          |14    |$12,816        |404          |
|      |               |             |15    |$8,464         |212          |

A. Scattergraph

We will start with a scatter graph, a plot of cost and activity levels. This gives us a visual representation of costs. Does it look like a relationship exists between repair-hours and overhead costs? We "eyeball" the scatter graph to determine the intercept and the slope of a line through the data points. Do you remember graphing total cost in Chapter 3? Where the total cost line intercepts the vertical (Y) axis represents fixed cost; that is, the intercept equals fixed costs. The slope of the line represents the variable cost per unit. So we use eyeball judgment to determine fixed cost and variable cost per unit and arrive at total cost for a given level of activity. As you can imagine, preparing an estimate on the basis of a scatter graph is subject to a high level of error. Consequently, scatter graphs are usually not used as the sole basis for cost estimates, but rather to illustrate the relations between costs and activity and to point out any past data items that might be significantly out of line.

B. High-Low Cost Estimation

A method to estimate costs based on two cost observations, usually at the highest and lowest activity levels.
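As a quick check on the 3C data above, the high-low computation can be sketched in a few lines of Python. Note that keeping the slope unrounded gives a fixed cost of about $6,973, slightly different from the figures obtained with the rounded $10.40 rate:

```python
# High-low method: estimate variable cost per unit (V) and fixed cost (F)
# from the highest- and lowest-activity observations in the 3C data.

high_hours, high_cost = 568, 12_883   # month 7 (highest activity)
low_hours,  low_cost  = 200,  9_054   # month 5 (lowest activity)

v = (high_cost - low_cost) / (high_hours - low_hours)   # slope
f = high_cost - v * high_hours                          # intercept

print(f"V = ${v:.2f} per repair-hour")   # about $10.40
print(f"F = ${f:,.0f}")                  # about $6,973 with the unrounded slope

# Cost estimate at 520 repair-hours
estimate = f + v * 520
print(f"TC(520) = ${estimate:,.0f}")     # about $12,384
```

Using either the highest or the lowest observation for the intercept gives the same F when V is left unrounded; the small differences quoted in the text come from rounding V to $10.40.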
Although the high-low method allows a computation of estimates of the fixed and variable costs, it ignores most of the information available to the analyst, because it uses only two data points (Lanen, 2008). The equations:

V = (Cost at highest activity − Cost at lowest activity) ÷ (Highest activity − Lowest activity)
F = Total cost at highest activity level − V × (Highest activity)
or
F = Total cost at lowest activity level − V × (Lowest activity)

Let's put the numbers in the equations:

V = ($12,883 − $9,054) ÷ (568 − 200) = $10.40 per repair-hour
F = $12,883 − $10.40 × 568 = $6,976
or
F = $9,054 − $10.40 × 200 = $6,974 (the difference is due to rounding V)

C. Statistical Cost Estimation Using Regression Analysis

Regression is a statistical procedure to determine the relationship between variables. Where the high-low method uses only two data points, regression uses all the data points to estimate costs. Regression statistically measures the relationship between two variables, activities and costs, and is designed to generate a line that best fits a set of data points. In addition, regression techniques generate information that helps a manager determine how well the estimated regression equation describes the relations between costs and activities (Lanen, 2008). We recommend that users of regression (1) fully understand the method and its limitations, (2) specify the model, that is, the hypothesized relation between costs and cost predictors, (3) know the characteristics of the data being tested, and (4) examine a plot of the data.

For 3C, repair-hours are the activity: the independent, or predictor, variable. In regression, the independent variable is identified as the X term. Overhead cost is the dependent variable, or Y term.
What we are saying is that overhead costs depend on, or are predicted by, repair-hours.

The Regression Equation

Y = a + bX
Y = Intercept + Slope × X
OH = Fixed costs + V × Repair-hours

You already know that the cost at any given activity level can be computed using the equation TC = F + VX. The regression equation, Y = a + bX, represents the same cost equation: Y equals the intercept plus the slope times the number of units. When estimating overhead costs for 3C, total overhead cost equals fixed costs plus the variable cost per repair-hour times the number of repair-hours. We leave the computational details and theory to computer science and statistics courses; here we focus on the use and interpretation of regression estimates. We describe the steps required to obtain regression estimates using Microsoft Excel in Appendix A to this chapter.

Learning Objective Five: Interpret the results of regression output.

Interpreting Regression

Interpreting the regression output allows us to estimate total overhead costs. The intercept of 6,472 is total fixed cost, and the coefficient, 12.52, is the variable cost per repair-hour.

Correlation coefficient (R): measures the linear relationship between the variables. The closer R is to 1.0, the closer the points are to the regression line; the closer R is to zero, the poorer the fit of the regression line (Lanen, 2008).

Coefficient of determination (R²): the square of the correlation coefficient; the proportion of the variation in the dependent variable (Y) explained by the independent variable(s) (X).

t-Statistic: the value of the estimated coefficient, b, divided by its standard error. Generally, if it is over 2, the coefficient is considered statistically significant; if significant, the cost is not totally fixed. The significance level of the t-statistic is called the p-value.
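The figures quoted above (intercept of 6,472, slope of 12.52) can be checked against the 15-month data with a short least-squares sketch. This is a plain-Python illustration, not the Excel procedure of Appendix A:

```python
# Ordinary least squares fit of the 3C data: overhead = a + b * repair_hours.
hours = [248, 248, 480, 284, 200, 380, 568, 344, 448, 544, 340, 412, 384, 404, 212]
costs = [9891, 9244, 13200, 10555, 9054, 10662, 12883, 10345, 11217,
         13269, 10830, 12607, 10871, 12816, 8464]

n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(costs) / n

# b = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2); a = mean_y - b * mean_x
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, costs))
sxx = sum((x - mean_x) ** 2 for x in hours)
syy = sum((y - mean_y) ** 2 for y in costs)

b = sxy / sxx                  # variable cost per repair-hour, about 12.52
a = mean_y - b * mean_x        # fixed cost, about 6,472
r2 = sxy ** 2 / (sxx * syy)    # coefficient of determination, about 0.828

print(f"a = {a:,.0f}, b = {b:.2f}, R^2 = {r2:.3f}, R = {r2 ** 0.5:.2f}")
print(f"TC(520) = {a + b * 520:,.0f}")   # about 12,984 with unrounded coefficients
```

With unrounded coefficients the estimate at 520 repair-hours comes out near $12,984; an estimate built from the rounded intercept (6,472) and slope (12.52) gives $12,982.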
Continuing to interpret the regression output: the Multiple R is the correlation coefficient and measures the linear relationship between the independent and dependent variables. R Square, the square of the correlation coefficient, identifies the proportion of the variation in the dependent variable (here, overhead costs) that is explained by the independent variable (here, repair-hours). The Multiple R of 0.91 tells us that a linear relationship does exist between repair-hours and overhead costs. The R Square, or coefficient of determination, tells us that 82.8% of the changes in overhead costs can be explained by changes in repair-hours. Can you use this regression output to estimate overhead costs for 3C at 520 repair-hours?

Multiple Regression

Multiple regression is used when more than one predictor (X) is needed to adequately predict the value (Lanen, 2008). For example, it might lead to more precise results if 3C uses both repair-hours and the cost of parts to predict total cost. Let's look at this example.

Predictors: X1 = repair-hours, X2 = parts cost

3C Cost Information

|Month |Overhead Costs |Repair-Hours (X1) |Parts (X2) |
|1     |$9,891         |248               |$1,065     |
|2     |$9,244         |248               |$1,452     |
|3     |$13,200        |480               |$3,500     |
|4     |$10,555        |284               |$1,568     |
|5     |$9,054         |200               |$1,544     |
|6     |$10,662        |380               |$1,222     |
|7     |$12,883        |568               |$2,986     |
|8     |$10,345        |344               |$1,841     |
|9     |$11,217        |448               |$1,654     |
|10    |$13,269        |544               |$2,100     |
|11    |$10,830        |340               |$1,245     |
|12    |$12,607        |412               |$2,700     |
|13    |$10,871        |384               |$2,200     |
|14    |$12,816        |404               |$3,110     |
|15    |$8,464         |212               |$752       |

In multiple regression, the Adjusted R Square is the correlation coefficient squared, adjusted for the number of independent variables used to make the estimate. Reading this output tells us that 89% of the changes in overhead costs can be explained by changes in repair-hours and the cost of parts. Remember, 82.8% of the changes in overhead costs were explained when the one independent variable, repair-hours, was used to estimate the costs. Can you use this regression output to estimate overhead costs for 520 repair-hours and $3,500 of parts cost?

Learning Objective Six: Identify potential problems with regression data.

Implementation Problems

It is easy to be overconfident when interpreting regression output; it all looks so official. But beware of some potential problems with regression data. We already discussed in earlier chapters that costs are curvilinear and cost estimates are only valid within the relevant range. Data may also include outliers, and relationships may be spurious. Let's talk a bit about each: curvilinear costs, outliers, spurious relations, and assumptions.

1. Curvilinear costs. Problem: attempting to fit a linear model to nonlinear data, which is likely to occur near full capacity. Solution: define a more limited relevant range (for example, from 25 to 75 percent of capacity) or design a nonlinear model. If the cost function is curvilinear, a linear model contains weaknesses; this generally occurs when the firm is at or near capacity. The linear cost estimate understates the slope of the cost line in the ranges close to capacity. This situation is shown in Exhibit 5.5.

2. Outliers. Problem: an outlier moves the regression line. Solution: prepare a scattergraph, analyze the graph, and eliminate highly unusual observations before running the regression. Because regression calculates the line that best fits the data points, observations that lie a significant distance away from the line can have an overwhelming effect on the regression estimate. With one significant outlier, the computed regression line can sit a substantial distance from most of the points. Please refer to Exhibit 5.6.

3. Spurious or false relations. Problem: using too many variables in the regression.
For example, using direct labor to explain materials costs: although the association is very high, both are actually driven by output. Solution: carefully analyze each variable and determine the relationships among all elements before using them in the regression.

4. Assumptions. Problem: if the assumptions of the regression are not satisfied, the regression is not reliable. Solution: there is no clear solution. Limiting the time period covered helps assure that cost behavior remains constant, yet this weakens the model because it leaves less data.

Learning Objective Seven: Evaluate the advantages and disadvantages of alternative cost estimation methods.

Statistical Cost Estimation

Advantages:
1. Reliance on historical data is relatively inexpensive.
2. Computational tools allow more data to be used than with non-statistical methods.

Disadvantages:
1. Historical data may be the only readily available, cost-effective basis for estimating costs.
2. Analysts must be alert to cost-activity changes.

Choosing an Estimation Method

Each cost estimation method can yield a different estimate of the costs likely to result from a particular management decision. This underscores the advantage of using more than one method to arrive at a final estimate. Which method is best? Management must weigh the costs and benefits of each method (Lanen, 2008). The more sophisticated methods generally yield more accurate cost estimates than the simple ones.

Estimated manufacturing overhead at 520 repair-hours (and, for multiple regression, $3,500 of parts cost):

|Account Analysis |High-Low |Regression |Multiple Regression |
|$12,586          |$12,384  |$12,982    |$13,588             |

Data Problems

No matter what method is used to estimate costs, the results are only as good as the data used. Collecting appropriate data is complicated by missing data, outliers, allocated and discretionary costs, inflation, and mismatched time periods.
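The method comparison above can be reproduced in a single sketch (Python with NumPy). The multiple-regression figure depends on the exact fitting procedure, so treat it as approximate:

```python
import numpy as np

hours = np.array([248, 248, 480, 284, 200, 380, 568, 344, 448, 544, 340, 412, 384, 404, 212])
parts = np.array([1065, 1452, 3500, 1568, 1544, 1222, 2986, 1841, 1654, 2100, 1245, 2700, 2200, 3110, 752])
costs = np.array([9891, 9244, 13200, 10555, 9054, 10662, 12883, 10345, 11217, 13269, 10830, 12607, 10871, 12816, 8464])

# 1. Account analysis: fixed and variable components classified by judgment.
account = 6216 + 12.25 * 520                        # $12,586

# 2. High-low: slope and intercept from the two extreme observations.
v = (12883 - 9054) / (568 - 200)
high_low = (12883 - v * 568) + v * 520              # about $12,384

# 3. Simple regression: overhead on repair-hours.
A1 = np.column_stack([np.ones(len(hours)), hours])
b1, *_ = np.linalg.lstsq(A1, costs, rcond=None)
simple = float(b1 @ np.array([1, 520]))             # about $12,984 unrounded

# 4. Multiple regression: overhead on repair-hours and parts cost.
A2 = np.column_stack([np.ones(len(hours)), hours, parts])
b2, *_ = np.linalg.lstsq(A2, costs, rcond=None)
multiple = float(b2 @ np.array([1, 520, 3500]))     # the text reports about $13,588

for name, est in [("Account analysis", account), ("High-low", high_low),
                  ("Simple regression", simple), ("Multiple regression", multiple)]:
    print(f"{name:>20}: ${est:,.0f}")
```

The simple-regression estimate differs by a couple of dollars from the $12,982 in the table because the table's figure is built from the rounded intercept and slope.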
Learning Objective Eight: (Appendix A) Use Microsoft Excel to perform a regression analysis.

Appendix A: Microsoft Excel as a Tool

Many software programs exist to aid in performing regression analysis. To use Microsoft Excel for regression, the Analysis ToolPak add-in must be installed. Software packages let users generate a regression analysis easily, but the analyst must be well schooled in regression in order to interpret the meaning of the output.

Learning Objective Nine: (Appendix B) Understand the mathematical relationship describing the learning phenomenon.

The learning phenomenon refers to the systematic relationship between the amount of experience in performing a task and the time required to perform it. It means that variable costs tend to decrease per unit as volume increases. Example of an 80 percent learning curve:

|Unit        |Time to Produce |Calculation of Time      |
|First unit  |100 hours       |(assumed)                |
|Second unit |80 hours        |(80 percent × 100 hours) |
|Fourth unit |64 hours        |(80 percent × 80 hours)  |
|Eighth unit |51.2 hours      |(80 percent × 64 hours)  |

Impact: the learning phenomenon causes the unit cost to decrease as production increases, which implies a nonlinear model. It is another element that can change the shape of the total cost curve: as workers become more skilled, they are able to produce more output per hour, which leads to a lower cost per unit the higher the output.

Chapter 5: END

COURSE WORK: Exercise 5-25 (A, B); Problem 5-47 (A, B)

REFERENCES

Lanen, W. N., Anderson, S. W., & Maher, M. W. (2008). Fundamentals of cost accounting. New York: McGraw-Hill Irwin.

Monday, December 2, 2019

The PC of the future: Major developments in the hardware and software

Today's computers have only about two more generations left in which they can keep getting smaller and more powerful at the same time, the two generations that present techniques for miniaturizing their basic circuits are expected to allow. The prospect of not being able to maintain this trend does not please physicists and computer technologists, so, supported by the large companies of the sector, they are looking for completely new approaches for the computers of the future. None of these approaches looks simple, but all are suggestive, although it is still premature to try to picture one of these molecular, quantum or DNA computers. Whoever buys a computer nowadays knows that it will be obsolete in a couple of years. We now take for granted the inexorable increase in the power of computers. But that cannot continue forever, at least if computers continue to be based on present technologies. Gordon Moore, cofounder of Intel and one of the gurus of information technology, anticipates that the existing methods of miniaturization will offer only two more generations of computers before their capacity is exhausted. In 1965, Moore made a prediction that was confirmed with amazing precision over the three following decades: the power of computers would double every 18 months. This increase has been due mainly to the ever smaller size of electronic components, so that ever more of them can be fitted into a microprocessor, or chip. A modern chip of only half a square centimeter contains many millions of tiny electronic components such as transistors. Each one measures less than a micron in diameter, more or less a hundredth of the thickness of a human hair.
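Moore's 18-month doubling compounds quickly; a small illustrative calculation (the 30-year span is the essay's "three following decades" after 1965):

```python
# Compound growth under Moore's prediction: computing power doubles every 18 months.
def growth_factor(years: float, doubling_months: float = 18) -> float:
    """How many times more powerful a computer becomes after `years`."""
    return 2 ** (years * 12 / doubling_months)

print(growth_factor(30))   # 2^20 = 1,048,576: about a million-fold over three decades
```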
These components are made basically of silicon, which conducts electricity, and of silicon dioxide, which is an insulator. To etch circuit patterns into silicon microprocessors, a technique called photolithography is currently used, in which a polymer film carrying the layout of the circuitry is formed on the layers of silicon or silicon dioxide. The circuit pattern is recorded in the polymer film by exposing it to light through a mask. Next, etching chemicals are applied that corrode the unprotected silicon material.

Limitation

The size of the elements that can be created by this procedure is limited by the wavelength of the light used to fix the pattern. At the moment, they can measure as little as one-fifth of a micron. But to create still smaller electronic components, down to one-tenth of a micron in diameter, the manufacturers of microprocessors will need to settle on radiation of a shorter wavelength: shorter-wavelength ultraviolet light, X-rays, or high-energy electron beams. The giants of the computer industry have not yet agreed on which to choose, but in any case the costs of developing the new technology and then changing the production process will be enormous. IBM, Motorola, Lucent Technologies and Lockheed Martin have been forced to collaborate on the development of X-ray lithography.
But miniaturization is not limited solely by photolithography. Even if methods can be devised to make transistors and other devices of a still smaller size, will they continue working effectively? Moore's law predicts that, by the year 2002, the smallest element of a silicon transistor, the gate insulator, will have a thickness of only 4 or 5 atoms. Will so fine a layer continue to provide the necessary insulation? This question has been investigated recently by the physicist David Miller and his colleagues at Lucent Technologies. They used advanced fabrication techniques to obtain a silicon dioxide film 5 atoms thick, which they introduced between two silicon layers. In comparison, commercial microprocessors have insulators about 25 atoms thick. Miller and his colleagues discovered that their ultrathin insulating oxide was no longer able to isolate the silicon layers. The researchers calculated that an insulator less than 4 atoms thick would have so much leakage that it would be useless. In fact, because of the limitations on making perfectly smooth, even films, insulators of twice that thickness would begin to break down if made with present methods. Therefore, conventional silicon transistors will have reached their minimum operative dimensions in only a decade or so. Many computer technologists affirm that, at the moment, silicon is what there is; but it may be that what there is will soon run out. On the other hand, to try to imagine the computer of the future is to risk seeming as absurd as the science fiction of the fifties. Nevertheless, judging by the present dreams of the technologists, we will be able to do without the plastic boxes and the silicon chips. Some say that computers will look more like organisms: their wires and switches will be composed of individual organic molecules.
Others speak of practicing computation in a bucket of water sprinkled with strands of DNA, the genetic material of cells, or enriched with molecules that manipulate data in response to the vibrations of radio waves. One thing seems certain: for computers to become ever more powerful, their components, the basic elements of the logic circuits, will have to be incredibly tiny. If the present tendency to miniaturization persists, these components will reach the size of individual molecules in less than a couple of decades, as we have seen. Scientists are already examining the use of carbon molecules called nanotubes as wires of molecular size that can connect conventional solid-state silicon components. Carbon nanotubes can measure only a few millionths of a millimeter, that is, a few nanometers, less than a tenth of the diameter of the smallest wires that can be etched into commercial silicon chips. They are hollow tubes of pure carbon, extremely strong, with the added attraction that some of them conduct electricity. Scientists at Stanford University in California have grown, from methane gas, carbon nanotubes that connect two terminals of electronic components. But connecting wires is the easy part. Can molecules process binary information? That is, can they combine sequences of bits (ones and zeros encoded as electrical impulses in present computers) the way the logic gates composed of transistors and other devices in silicon chips do? In a logic operation, certain combinations of ones and zeros in the input signals generate other combinations in the output signals. In this way, data are compared, sorted, added, multiplied, or manipulated in other ways. Individual molecules have carried out some logic operations, with the bits encoded not as electrical impulses but as pulses of light or other molecular components.
For example, a molecule could discharge a photon, a particle of light, if it received a charged metal atom and a photon of a different color, but not if it received only one of the two. Nevertheless, nobody has a real idea of how to connect these molecules into a reliable, complex circuit that can calculate: an authentic molecular computer. Some detractors say that molecular computing will never be viable.

Calculations with DNA

At the beginning of the nineties, Leonard Adleman, of the University of Southern California, proposed a different way to use molecules to calculate, and showed that the cell's own database, DNA, can be used to solve computational problems. Adleman realized that DNA, basically a chain of four different molecular components, or bases, that act as a four-letter code for genetic information, looks remarkably like the universal computer postulated in the thirties by the mathematical genius Alan Turing, which stores binary information on a tape. Different chains of bases can be programmed at will into synthetic DNA strands using the techniques of modern biotechnology, and these strands can then be generated, cut and assembled in enormous quantities. Could these methods be used to persuade DNA to calculate like a Turing machine? Adleman saw that the DNA system could be especially apt for solving minimization problems, such as finding the shortest route connecting several cities. This kind of problem is one of the hardest for conventional computers, since the number of possible routes increases very quickly as more cities are included. A conventional computer takes a long time to examine all those options. But if each possible solution is encoded in a DNA strand, the problem does not seem so terrible, because even a simple pinch of DNA contains many trillions of molecules.
So it is only necessary to separate out the DNA strands that encode the best solution. This can be done using biotechnological methods that recognize specific short sequences of bases in a DNA strand. This procedure is no more than a slightly unorthodox way of finding a solution: first generate all the possible solutions, and then use logic operations to choose the correct one. But, because everything happens in parallel, all the possible solutions are created and examined at the same time, so the process can be very fast. Calculation by DNA has been demonstrated in principle, but it has not yet been proven to solve problems that a conventional computer cannot. It seems more apt for a quite specific set of problems, such as minimization and codification, than as a method of calculation for questions of every type.

The quantum world

Already in the sixties, some computer scientists noticed where miniaturization was taking them: toward the quantum realm, where the counterintuitive rules of quantum mechanics govern the behavior of matter. As conventional circuit devices become smaller, quantum effects become a more and more important aspect of their behavior. Could it be feasible, they asked, to turn this potential complication into an advantage? The suggestion bore fruit in the eighties, when physicists began to look closely at how a computer could operate under the influence of quantum mechanics. What they discovered was that it could gain enormously in speed. The crucial difference between processing information in the quantum world and in the classical one is that the former is not black and white. In a classical computer, every bit of information is one thing or the other: a 1 or a 0. But a quantum bit, or qubit, can be a mixture of both.
Quantum objects can exist in a superposition of states that are classically exclusive, like Schrödinger's famous cat, which is neither alive nor dead but in a superposition of the two. This means that a series of quantum switches (objects in well-defined quantum states, such as atoms in different states of excitation) has many more configurations than the corresponding classical series of bits. For example, whereas a classical three-bit memory can store only one of the eight possible configurations of ones and zeros, the corresponding quantum series can store all eight, in a superposition of states. This multiplicity of states gives quantum computers much more power, and therefore much more speed, than their classical counterparts. But actually shaping these ideas into a physical device poses an extraordinary challenge. A quantum superposition of states is a very delicate thing, and difficult to maintain, especially if it extends over a large set of logical elements. Once the superposition begins to interact with its surroundings, it starts to collapse and the quantum information leaks into the environment. Some researchers thought this problem would rule out large-scale quantum computing, in which great amounts of data are manipulated in a multitude of impossibly delicate and unwieldy steps. But the problem has been lessened in recent years by the development of algorithms that will allow quantum computers to work in spite of the small errors introduced by this kind of loss.

MAJOR DEVELOPMENTS IN THE SOFTWARE

Introduction

Software engineering is not a 100 percent exact science. Algorithms are shaped by the logical, political and personal surroundings of the programmer. To talk about the future of software, we have to know a few historical facts. After that, we will have to choose our side in the software wars, between those who defend the open-source-code policy and those who defend the closed-source policy.
The software wars The Internet would not exist without free software. In the 1960s, Bell Labs was already releasing the source code of its newly invented operating system, UNIX, and from that time up to the latest version of the Linux kernel, the history of software has been built on the exchange of information. That exchange, which the Open Source movement continues to build on, is the fundamental basis of the network-society revolution. Free software is one field of information and communication technology that certainly has no problem of decline. The movement keeps growing, and in recent years it has advanced extraordinarily. The statistics are usually eloquent: last year, 50 percent of software developers had already considered migrating their projects to open source. Applications as powerful as Sun's StarOffice suite or Real Networks' streaming-server technology have acted as a locomotive for many other, lesser-known applications that are also moving toward free development of their code. The force behind this revolution in computing and telecommunications rests on values and a philosophy unknown until now: the force of community and of networked teamwork toward tasks and objectives that acquire, in themselves, a special value for developers, who are compensated in a non-pecuniary form that was unimaginable in a world where the Protestant work ethic has prevailed throughout the West. Students of these technologies and their implications, such as M. Castells, R. Stallman, P. Himanen, L. Torvalds and Jesus G.
Barahona constantly speak to us of the possibilities this movement opens up for homo digitalis: the chance to reach more knowledge in the future thanks to policies aligned with the founders' idea of sharing code and knowledge for the mutual good. The movement represented by the Free Software Foundation goes beyond a mere choice of policies for developing new information and communication technologies. Betting on open-source development, the adoption of standards, and support for free operating systems strengthens the knowledge of the members of the digital society, rather than merely promoting the consumption of computing simply because access to it is immediate. In a digital society, knowing the tools and being able to develop them is as important as using them, since this is precisely what gives power to citizens and organizations. By adopting computing policies based on free software, knowledge of networks and code is also passed on to users, who can then play a fundamental role as active rather than passive actors in the digital revolution. But everything this movement represents is incompatible with the policies of the great company that nowadays exerts worldwide control over computing: Microsoft. The company from the State of Washington is capturing most of the world's users of the Network, along with the office-software tools and workstation operating systems they depend on. This is undeniable, as is the duty of public administrations to scrutinize the systems by which these companies extract data from users to build profiles and databases that will end up who knows where.
Given that the Redmond company (Microsoft) belongs to a nation that prohibits secure 1024-bit encryption for its subjects, how are the users of the planet supposed to trust the security policies it is apparently going to sell us? And that is the case even though the U.S. Department of Defense itself trusts Open Source and its encryption systems, and uses them. On the other hand, anyone who has tried Microsoft's version XP knows well what network-based control over a user's data, and over a computer's MAC number, looks like. And in the face of this, few people have stepped up to fight those policies. The networks of hacker labs and other groups devoted to teaching free software tools (the knowledge needed to maintain servers, publish without censorship, develop programs, give computing courses, and so on) are an alternative that is already bearing fruit. The gurus of the digital era have full names tied to these movements at some stage of their lives. The father of this way of thinking is Richard Stallman, and its best-known face is Linus Torvalds, who not long ago was awarded the prize for best European industrialist. Linus and Richard are the key pieces in this whole revolution based on freedom, networked teamwork, and pure satisfaction in work well done. The competition certain companies can exert will be worth little against this movement, which is essential for server-side technologies; they will have to join it, as IBM and Sun have done in beginning to understand its potential and to profit from it.
To draw a simile, imagine that the worldwide community of doctors and medical researchers worked in a network, sharing their knowledge at any moment of the day, and received in return the immediate solution to every problem they faced. With such a system, many of today's diseases would long since have been eradicated. Moreover, in this example, the professionals with large salaries tied to policies of maximum secrecy in research laboratories would have little to say against the growing force of a movement, developed over the Network, whose knowledge is based on sharing. Those who know the difference between free languages and development systems such as PHP, Zope, Perl, and so on know well how far free code can go. Those who know only proprietary, closed technologies, however, will hardly be able to look toward the future, since they are headed for technological slavery. Conclusion Computer science is a complex science, but one that people create. Many people do not know that it is a science with two branches, distinct from each other yet interdependent: the architecture of the computer, and the software needed to use it. Both are very important. But its possible uses are so many that, as in medicine, specialists are needed in each of its parts. Back in 1947, when the transistor was invented, and in 1804, when Jacquard designed a loom that performed predefined tasks by feeding punched cards into a reading contraption, nobody imagined how quickly we would arrive at today's supercomputers.

Wednesday, November 27, 2019

Free Essays on Drivers License Age Be Raised

Drivers license age be raised Should the age to receive a driver's license be raised and, if not, should graduated licensing be instituted? This is a growing question across America as well as other countries around our globe. The percentage of teenage accidents involving automobiles is on a constant rise. Whether caused by lack of experience or the influence of alcohol, death has become all too common among teen motorists. This problem is not going to go away by itself; action needs to be taken. The state must raise the age requirement to receive a license or institute graduated licensing, because teens are not mature enough to handle the dangerous responsibilities of driving. We allow teens to get their licenses at an earlier age than in most countries, and little driving experience typically is required before licenses are issued. This is not very smart on our part, considering that according to the National Highway Traffic Safety Administration, 16-year-olds have the highest percentage of crashes involving speeding, the highest percentage of single-vehicle crashes, the highest percentage of crashes with driver error, and the highest vehicle occupancy (NHTSA). Compared with older drivers, teenagers as a group are more willing to take risks and less likely to use safety belts. Many experts blame young teens' immaturity, impulsiveness, and lack of proper training and experience as contributing factors to the high rate of teen-involved accidents. Teens don't need to be victims of their driving inexperience. During 1975-96 the death rate among 16-year-old drivers was trending upward. The rate increased from 19 per 100,000 in 1975 to 35 per 100,000 in 1996, and this increase occurred in both males and females. The number of 16-year-old driver deaths increased about 50 percent during 1975-96 (from 362 to 547 annually) while deaths among 17-19-year-olds declined 27 percent (CNN). "Any way you look at it, 16 year...

Saturday, November 23, 2019

Lewis Acid-Base Reaction Definition and Examples

Lewis Acids A Lewis acid-base reaction is a chemical reaction that forms at least one covalent bond between an electron-pair donor (Lewis base) and an electron-pair acceptor (Lewis acid). The general form of a Lewis acid-base reaction is: A + B⁻ → A-B where A is an electron acceptor or Lewis acid, B⁻ is an electron donor or Lewis base, and A-B is a coordinate covalent compound. Significance of Lewis Acid-Base Reactions Most of the time, chemists apply the Brønsted acid-base theory (Brønsted-Lowry), in which acids act as proton donors and bases as proton acceptors. While this works well for many chemical reactions, it doesn't always work, particularly when applied to reactions involving gases and solids. The Lewis theory focuses on electrons rather than proton transfer, allowing for the prediction of many more acid-base reactions. Example Lewis Acid-Base Reaction While Brønsted theory cannot explain the formation of complex ions with a central metal ion, Lewis acid-base theory sees the metal as the Lewis acid and the ligand of the coordination compound as the Lewis base. Al³⁺ + 6H₂O ⇌ [Al(H₂O)₆]³⁺ The aluminum metal ion has an unfilled valence shell, so it acts as an electron acceptor or Lewis acid. Water has lone-pair electrons, so it can donate an electron pair to serve as the ligand, or Lewis base.
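For readers who prefer the equations typeset cleanly, the two reactions discussed in this article can be sketched in standard LaTeX notation (this is only a rendering of the same chemistry, not additional material):

```latex
% General Lewis acid-base reaction: acceptor A and donor B^- form a
% coordinate covalent bond A-B.
\[
\mathrm{A} + \mathrm{B}^{-} \longrightarrow \mathrm{A{-}B}
\]

% Complex-ion formation: the Al^{3+} ion (Lewis acid) accepts lone
% pairs from six water ligands (Lewis bases).
\[
\mathrm{Al^{3+}} + 6\,\mathrm{H_2O} \rightleftharpoons
\left[\mathrm{Al(H_2O)_6}\right]^{3+}
\]
```

Note that in the second reaction no protons are transferred, which is exactly why the Brønsted-Lowry picture cannot classify it while the Lewis picture can.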

Thursday, November 21, 2019

The eating habits of students Essay Example | Topics and Well Written Essays - 1500 words

The eating habits of students - Essay Example The young generation seems to pay little attention to the crucial topic of diet and health. Their choice of diet remains a secondary consideration in their responsibility for their own health (Gullotta, Adams & Ramos 2005). Dietary disorders among young people are rising at an alarming rate, and instances of obesity among the young are increasing more than ever before. According to Richards (2007), guiding the young generation on issues of diet and health becomes essential. Topics such as the choice of food, awareness of a good balanced diet, and the need to draw young people's attention to nutrition need further elaboration. Teaching the youth about diet and health is therefore an issue most governments must consider engaging with (Berg 2002). Statistics in the United Kingdom show that many school-going students have poor knowledge of diet and health (McGinnis & Gootman 2006). Many students pay little attention to nutrition, and the choice of a better diet is poor among many of them (Glass 2009). This report seeks to find out which areas of diet and health students pay the least attention to, and how careful they are about their health; it discusses the important dietary areas overlooked by students. In order to find out the eating habits of students, a survey was conducted with the help of an oral questionnaire. A total of 30 randomly selected students in the UK were interviewed at different learning institutions, so as to represent the student population. The group of respondents was composed of students from different learning institutions at different stages of their studies, and consisted of fifteen boys and fifteen girls. The questionnaire (attached in the appendix) consists of different kinds of questions which chiefly build upon each other. The question types used are, for example, "yes/no" and multiple choice

Tuesday, November 19, 2019

African Americans Term Paper Example | Topics and Well Written Essays - 1000 words

African Americans - Term Paper Example Harriet Tubman was a strong African-American woman born into servitude. Both of her parents toiled as slaves in Maryland. Growing up, she endured a harsh life, subjected to whipping from a tender age. When she entered adulthood, she realized that she could be sold as a slave, as was the norm at that time (Siebert & Hart, 2006). Therefore, in 1849 she decided to escape to Philadelphia with the assistance of an abolitionist. During that period, there were a few white people who opposed slavery and helped to free slaves. Harriet was handed a note by her abolitionist neighbor that contained two names that would direct her to a safe place. Harriet was joined by other blacks and traveled the Underground Railroad, a network of safe houses. After her escape, she met with other abolitionists, with whom she planned how to free the slaves who were left behind. Due to her resilience in fighting slavery, Harriet became the leading abolitionist before the Civil War (Lillian, n.d.). Harriet's escape was significant in America, as it showed her determination to free other slaves. Her character also gave slaves hope, as they felt they had a person to fight for them and keep them safe. In 1877, America witnessed the first person of color ever to graduate from a military academy in the country's history. Henry Ossian Flipper, who was born into slavery, made history by being the first person of color to graduate from West Point. During his years in the academy, he never had any contact with a white cadet. The Academy was the first military school to be established in the US; its primary purpose was to educate and train young men in the theory and practice of military science. Before Henry was admitted to the military school, there was another black American named James Webster Smith. Although James was admitted to the military school, he never graduated. However, Henry endured all

Sunday, November 17, 2019

Sorry for the loss Essay Example for Free

Sorry for the loss Essay While a butterfly is free to spread its beautiful wings, many people suffer in captivity and can only dream about the world outside. The yearning for freedom is depicted in Bridget Keehan's short story 'Sorry for the Loss' from 2008, where we meet the chaplain Evie and the young criminal Victor. The story begins when Evie has to tell Victor that his Nan is dead, but the situation turns out differently than expected. Evie is a chaplain who has worked in the prison for over a year (p. 1, l. 18), but she doesn't really like being there. The atmosphere in the prison intimidates her, and she feels uncomfortable there because of all the noises. That's why she treasures the times when the prisoners are out and she has some quiet time on her own. She is very religious, and she likes to use her quiet time to meditate and pray (p. 2, l. 32). She is a good girl who behaves properly and follows the Bible. Even though the prisoners have done bad things, she is kind to everyone and tries to understand how the prisoners feel. She even tries to imagine Jesus as one of the prisoners (p. 2, l. 40), which shows that she is very good at putting herself in other people's shoes. In the prison she also helps run the Enhanced Thinking Skills course (p. 3, l. 91). She is a kind, genuine person, and she is very nervous when she has to tell Victor that his Nan is dead, because she is scared that he will get upset (p. 2, l. 55). Evie is fragile, but she is also a very loving and caring person, and as soon as she sees the young Victor, she imagines him being her son (p. 3, l. 75). Victor is very young, so her loving heart immediately feels sorry for him. Victor is described as a young, good-looking boy (p. 3, l. 75). He has olive skin, sparkling eyes and a big, white smile with a glint of gold filling (p. 4, l. 136). He is a Catholic, but not very practicing; instead he likes to explore new things and religions. He has been in prison for five years (p.
3, l. 78), but although he has been there for a long time, he is different from the other prisoners. He has a kinder look to him, and he certainly doesn't look like a boy who would hurt, let alone kill, someone. While the other prisoners' cells are filled with family photographs or pictures of women, Victor's cell is completely empty (p. 4, l. 114). He seems quite immature, but even though he seems young and not clever, he has spent a lot of his time in prison studying: 'Yeah I know ETS. Done it in my last nick' (p. 3, l. 90). He is also part of the book club, and he even refers to Shakespeare's tragedy 'King Lear' when he talks to Evie. He has quite an interesting interpretation of the tragedy, though, because he imagines Cordelia as a stoned pot-head (p. 3, l. 110). He seems like a very kindhearted person, and he behaves well when Evie visits him. He shows emotion for the pigeons outside his window, but he doesn't seem to care about his Nan's death, and this is the first sign the reader gets that the genuine Victor may not be so genuine after all. The story is told by a third-person omniscient narrator, but we hear the story from Evie's point of view. Her thoughts are often described: 'Eve considers, it's a wonder the thick stone walls that separate this world from the one outside contain the noise' (p. 1, l. 28), so it is almost as if the story were told by Evie herself. The narrator doesn't comment upon the text, which also makes it feel as though we hear the story through Evie and her thoughts. There is great use of figurative language, which makes the text come alive, with sentences such as 'Bellowed from the testosterone voices that have been trained like tenors to reach the gods' (p. 1, l. 23) and 'The office, bulkily built like a rugby player' (p. 2, l. 62). The characters, especially Victor, are also described in great detail, which makes the reader feel as if we almost know the characters in person.
Through the narrative technique we get an idea of who the characters are, for example through the use of direct speech, which shows how some of the characters are well educated while others aren't. Evie, for example, uses correct grammar when she speaks, which indicates that she is well educated. Victor, on the other hand, has bad grammar: 'No I'm safe ta, would you?' (p. 3, l. 93), 'Done it in my last nick' (p. 3, l. 91) and 'but that's evil innit?' (p. 4, l. 132), so it is obvious that he has spent most of his life in prison instead of attending school. The narrator also uses symbols in the story. One of the symbols is the pigeons that live close to Victor's window. A pigeon is a bird and a symbol of freedom, but in the story Victor's 'neighbor' treats the pigeons very badly: '... he feeds the pigeons crumbs so they get to trust him, then he catches one and traps it' (p. 4, l. 128). This shows the fragility of freedom, and the prisoners know, more than anyone, that freedom can be taken away from you in the blink of an eye. The window is also used as a symbol of the prisoners' dream of freedom, because when they look out of the window they see 'a slice of road leading out of town' (p. 2, l. 53). A window is an object which allows you to look outside and see different parts of the world, and that's exactly what the prisoners do: they look outside and dream about a life on the other side of the bars. One of the main symbols, though, is the butterfly knife. The butterfly knife symbolizes Victor, and it shows how beauty can hide something cruel. What you thought was pretty and genuine may end up causing great damage. That is what the whole story is about, and that is exactly what the butterfly knife symbolizes. The author Bridget Keehan has used many contrasts in her short story. One of the main contrasts is between the prisoners and the life outside the prison. The prisoners are trapped in the prison and have no freedom.
That's why the prisoners always stand by the big window where they can have a view of the world outside. The contrast between free and captive is also shown through the office workers on the street. When the prisoners look out of the window, they can see the office workers on their way to work. The office workers are free men who have jobs and lives, while the prisoners don't really have any purpose in their lives, since they are trapped behind the bars. The prisoners can only look at the office workers with envious eyes (p. 2, l. 50). The contrast between the outside and inside worlds is also depicted, since the prison is described as something un-beautiful: '... with its banging of gates and scraping of keys in locks and the clatter of each prisoner's metal food tray' (p. 1, l. 22), while nature outside is described as beautiful: 'It's a bright, blue-sky day, and as the sun streams in from the large solitary window and warms her face' (p. 2, l. 35). Another contrast is between Evie and the environment of the prison. Evie is very religious, and she follows the rules. She is a good girl who has never tried heroin (p. 2, l. 38) or done anything bad. Evie is described as a very fragile and feminine person, which is the complete opposite of the prison's harsh environment. The prison is described as loud and cold, and it is surrounded by thick stone walls. Besides that, the prison is full of big men and 'testosterone voices' (p. 1, l. 25), so Evie's gentle and feminine character doesn't really fit in. Evie is also a contrast to the prisoners, since Evie follows God's rules, while many of the prisoners have committed murder or rape, which is completely against Catholic beliefs. One of the most striking contrasts in the story is Victor himself. Victor is a contrast in himself, because his outer beauty camouflages his inner murderer.
In the beginning, the reader almost feels sorry for Victor, because he seems so genuine, but once the officer tells Evie that Victor is a murderer, we realize that it is just a facade. Victor is a contrast because he is both good and bad, and that's why the butterfly knife symbolizes him: it looks beautiful and harmless, but it can cause extreme damage. The main theme in this short story is the yearning for freedom, but the text also raises questions about trust and sincerity. It focuses on the fact that everyone has their own secrets, whether they show or not. The text is quite relevant today, because we live in a world full of crime, and the prisons are filled with people who have done something bad. It makes us wonder: do we take freedom for granted? Bridget Keehan's 'Sorry for the Loss' tells a fascinating story about the meeting between freedom and captivity, and with her use of symbols and contrasts she makes it clear that even beautiful things have dark sides.