(1) You are required to analyse each of the following areas of statistics relating to business decision making, explaining in detail what each of them does:
· Descriptive Measures
· Sampling Distributions
· Linear Regression
· Time Series Forecasting
· Index Numbers
(2) Additionally, you are to explain the advantages and disadvantages each of them has if a finance department attempts to use it to forecast the sales budget for the coming year.
(3) After explaining the relevant advantages and disadvantages, you are then required to identify (with reasons) which technique is most appropriate for the task proposed above by the finance department.
(4) Using the technique you have identified as most appropriate, you are then required to analyse the AMP example given on the Thomson ONE Banker Analytics website.
Descriptive Measures
Business decision makers use descriptive measures to assess performance. For descriptive measures, progress is reported through narrative accounts outlining specific actions taken, together with any results attributed to those actions. Four types of characteristics describe a data set pertaining to some numerical variable or phenomenon of interest:
· Location
· Variation
· Relative standing
· Shape
In any analysis and/or interpretation of numerical data, a variety of descriptive measures representing the properties of location, variation, relative standing and shape may be used to extract and summarize the salient features of the data set.
If these descriptive measures are computed from a sample of data they are called statistics. In contrast, if these descriptive measures are computed from an entire population of data, they are called parameters.
Disadvantages of descriptive measures for forecasting purposes are that they:
· do not involve an objective review time standard,
· do not have a quantifiable measure of successful performance, and
· do not specify the time frame within which the task must be completed.
Judgmental (opinion-based) forecasting methods also carry disadvantages:
o Salespeople are not trained forecasters
o Salespeople focus on the present. They do not anticipate environmental change
o Salespeople may be too optimistic
o They may go low so that they have an easier time hitting quota
o Takes time away from selling
o The salespeople may not be very interested and could do a sloppy job
o Opinion-based rather than data (fact) based
o Takes executives away from their jobs
o People with little or no marketing knowledge may end up making market forecasts
o Hard to break down to territories
o Hard to break down for tasks
These methods also have advantages:
o Salespeople know the actual sales potential in their territories
o Salespeople are closest to the source.
o Salespeople accept the forecast because they did it
o Put responsibility for forecasting in the hands of those that can make it happen
o Statistical and technical errors are minimized
o Detailed final forecast is done by product, customer, market
o Can be done with little or no data or history
o Easy, quick, not much math
o Opinions from all over the firm are integrated
o Usually inexpensive
We denote the probability of an event x as p(x). The conditional probability of x, given that event y has occurred, is written p(x|y).
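As a small illustration of this notation, the sketch below estimates a marginal probability p(x) and a conditional probability p(x|y) from a table of counts. The events ("sale", "repeat customer") and all of the counts are invented for demonstration; they do not come from the text.

```python
# Illustrative only: estimating p(x) and p(x|y) from made-up counts,
# where x = sale closed / not closed and y = repeat / new customer.
joint_counts = {
    ("sale", "repeat"): 30,
    ("sale", "new"): 10,
    ("no_sale", "repeat"): 20,
    ("no_sale", "new"): 40,
}

total = sum(joint_counts.values())

# p(sale): marginal probability of closing a sale
p_sale = sum(v for (x, y), v in joint_counts.items() if x == "sale") / total

# p(sale | repeat): restrict attention to repeat customers only
repeat_total = sum(v for (x, y), v in joint_counts.items() if y == "repeat")
p_sale_given_repeat = joint_counts[("sale", "repeat")] / repeat_total

print(p_sale)               # 0.4
print(p_sale_given_repeat)  # 0.6
```

Note how conditioning on y changes the probability of x: knowing the customer is a repeat customer raises the estimated chance of a sale from 0.4 to 0.6.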
There are many different kinds of probability. The textbook example is derived from some inherent property of the system producing the event, such as tossing a coin. Neglecting the quite unlikely outcome of the coin landing on its edge, this is clearly a dichotomous event: the coin lands either heads up or tails up. Assuming an unbiased coin, the probability of either a head or a tail is obviously 50 percent. Each time we toss the coin, the probability of either outcome is always 50 percent, no matter how many times the coin has been tossed. If we have had a string of 10 heads, the probability of another head is still 50 percent on the next toss. The frequency of any given sequence of outcomes can vary, depending on the particular sequence, but if we are only concerned with a particular toss, the probability stays at 50 percent. This underscores the fact that there are well-defined laws for manipulating probability that allow one to work out such things as the probability of a particular sequence of coin-toss outcomes; these laws can be found in virtually any textbook on the subject.

Outcomes can be polychotomous, of course; in the case of tossing a fair die, the probability of any particular face landing on top is clearly 1/6, or about 16.7 percent. This classical concept of probability arises inherently from the system being considered. It should be just as obvious that it does not apply to business forecasting probabilities: we are not dealing with geometric idealizations when we look at real sales systems and processes.
Another form of probability is associated with the notion of the frequency of occurrence of events. We can return to the coin-tossing example to illustrate this. If a real coin is tossed, we can collect data about such things as the frequency with which heads and tails occur, or the frequency of particular sequences of heads and tails. We believe that if we throw a fair coin enough times, the observed frequency should tend to 50 percent heads or tails, at least in the limit as the sample size becomes large. Further, we would expect a sequence containing a string of 10 heads to be much less likely than some mixture of heads and tails. Is this the sort of concept we employ in sales forecasting probabilities? We don't believe so, in general. Although we certainly make use of analogs in forecasting, each sales system is different, to a greater or lesser extent, from every other sales system. Are the sales in each territory the same as the sales in every other territory? Not likely! Therefore, if a sales system looks similar to another one we have experienced in the past, we might think that the sales would evolve similarly, but only to a point. It would be extremely unlikely that exactly the same sales would unfold, down to the tiniest detail. In fact, this kind of sensitivity was instrumental in Ed Lorenz's development of the ideas of "chaos."
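The frequency notion above can be illustrated with a short simulation: toss a simulated fair coin many times and watch the observed frequency of heads approach 50 percent, while any one specific sequence of 10 tosses remains rare. This is a minimal sketch; the seed and the toss counts are arbitrary choices, not anything prescribed by the text.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def head_frequency(n_tosses: int) -> float:
    """Observed proportion of heads in n_tosses simulated fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The observed frequency drifts toward 0.5 as the sample size grows.
for n in (10, 1_000, 100_000):
    print(n, head_frequency(n))

# Yet the probability of any *specific* sequence of 10 tosses (for example,
# 10 heads in a row) is 0.5 ** 10, even though each toss remains 50/50.
print(0.5 ** 10)  # 0.0009765625
```

This is exactly the distinction the paragraph draws: individual tosses keep a fixed probability, while long-run frequencies and particular sequences behave very differently.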
An important property of probability forecasts is that a single probability forecast has no clear sense of "right" and "wrong." For example, if it rains on a 10 percent probability-of-precipitation (PoP) forecast, is that forecast right or wrong? Intuitively, one suspects that having it rain on a 90 percent PoP forecast is in some sense "more right" than having it rain on a 10 percent forecast. However, this is only one aspect of assessing the performance of forecasts. In fact, the use of probabilities precludes the simple assessment that the notion of "right vs. wrong" implies; this is the price we pay for the added flexibility and information content of probability forecasts. Thus, the fact that on any given forecast day two forecasters arrive at different subjective probabilities from the same data does not mean that one is right and the other wrong. It simply means that one is more certain of the event than the other, and the probabilities quantify that difference.
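One standard way to assess probability forecasts across many occasions is the Brier score: the mean squared difference between the forecast probabilities and the 0/1 outcomes, where lower is better. The text does not name this measure, so the sketch below is offered only as an illustration of how probabilistic forecasts can be scored without labelling any single forecast "right" or "wrong"; the forecast data are invented.

```python
# Brier score: mean squared difference between forecast probabilities and
# outcomes (1 if the event occurred, 0 if not). Lower is better. It scores
# a forecaster over many forecasts rather than judging any single one.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

forecaster_a = [0.9, 0.1, 0.8, 0.2]   # confident, well-calibrated forecasts
forecaster_b = [0.6, 0.4, 0.6, 0.4]   # hedging forecasts on the same events
occurred     = [1,   0,   1,   0  ]   # hypothetical outcomes

print(brier_score(forecaster_a, occurred))  # about 0.025
print(brier_score(forecaster_b, occurred))  # about 0.16
```

Both forecasters lean the right way on every event, so neither is "wrong"; the score simply rewards the one whose stated certainty better matched what happened.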
Sampling Distributions
If you compute the mean of a sample of 10 numbers, the value you obtain will not equal the population mean exactly; by chance it will be a little bit higher or a little bit lower. If you sampled sets of 10 numbers over and over again (computing the mean for each set), you would find that some sample means come much closer to the population mean than others. Some would be higher than the population mean and some would be lower.

Imagine sampling 10 numbers and computing the mean over and over again, say about 1,000 times, and then constructing a relative frequency distribution of those 1,000 means. This distribution of means is a very good approximation to the sampling distribution of the mean. The sampling distribution of the mean is a theoretical distribution that is approached as the number of samples in the relative frequency distribution increases. With 1,000 samples, the relative frequency distribution is quite close; with 10,000 it is even closer. As the number of samples approaches infinity, the relative frequency distribution approaches the sampling distribution.

The sampling distribution of the mean for a sample size of 10 was just an example; there is a different sampling distribution for other sample sizes. Also, keep in mind that the relative frequency distribution approaches a sampling distribution as the number of samples increases, not as the sample size increases, since there is a different sampling distribution for each sample size.

A sampling distribution can also be defined as the relative frequency distribution that would be obtained if all possible samples of a particular sample size were taken. For example, the sampling distribution of the mean for a sample size of 10 would be constructed by computing the mean for each of the possible ways in which 10 scores could be sampled from the population and creating a relative frequency distribution of these means. Although these two definitions may seem different, they are actually the same: both procedures produce exactly the same sampling distribution. Statistics other than the mean have sampling distributions too. The sampling distribution of the median is the distribution that would result if the median instead of the mean were computed in each sample.
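The thought experiment above (draw samples of size 10, compute each mean, repeat about 1,000 times) can be simulated directly. The population below is invented purely for illustration; the text does not specify one.

```python
import random
import statistics

random.seed(0)  # reproducible run

# Hypothetical population of 10,000 values (mean about 100, sd about 15).
population = [random.gauss(100, 15) for _ in range(10_000)]
population_mean = statistics.mean(population)

# Draw 1,000 samples of size 10 and record the mean of each sample.
# The collection of these sample means approximates the sampling
# distribution of the mean for a sample size of 10.
sample_means = [
    statistics.mean(random.sample(population, 10)) for _ in range(1_000)
]

# The sample means cluster around the population mean ...
print(population_mean, statistics.mean(sample_means))

# ... and their spread is much tighter than the spread of individual
# observations (roughly sigma / sqrt(10), the standard error of the mean).
print(statistics.stdev(population), statistics.stdev(sample_means))
```

Increasing the number of samples from 1,000 to 10,000 sharpens the approximation to the sampling distribution; increasing the sample size from 10 would instead approximate a different, narrower sampling distribution, exactly as the text distinguishes.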
Disadvantages of sampling (particularly non-probability sampling) for the finance department include:
1. Sampling error cannot be calculated. Thus, the minimum required sample size cannot be determined, which means the researcher may sample too few or too many members of the population of interest.
2. The researcher does not know the degree to which the sample is representative of the population from which it was drawn.
3. The research results cannot be projected (generalized) to the total population of interest with any degree of confidence.
Linear Regression
In statistics, linear regression is a method of estimating the conditional expected value of one variable, y, given the values of some other variable or variables, x.
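As a minimal sketch of the idea, the code below fits a straight line y = a + b·x by ordinary least squares and extrapolates one period ahead, the way a finance department might project a sales trend. The period numbers and sales figures are hypothetical, not taken from the text.

```python
# Ordinary least squares for a single predictor, computed by hand:
# slope b = covariance(x, y) / variance(x), intercept a = mean_y - b * mean_x.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

periods = [1, 2, 3, 4, 5]            # e.g. past five years
sales   = [102, 108, 115, 119, 126]  # hypothetical sales in each period

a, b = fit_line(periods, sales)

# The fitted line estimates the conditional expected value of sales given
# the period; extrapolating to period 6 gives a one-year-ahead forecast.
forecast_next = a + b * 6
print(round(forecast_next, 1))  # 131.7
```

The slope b (about 5.9 here) is the estimated sales growth per period; extrapolating beyond the observed range assumes the linear trend continues, which is exactly the judgment call the finance department must defend.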