New Series of Time Series: Power BI Custom Visual (Part 6)


In the last post, I explained how to do time series forecasting using the “Exponential Smoothing” approach in Power BI and started to explain the main parameters that we need to set up. The main concepts behind most of these parameters have been explained in previous posts (Post 4, Post 3, Post 2, and Post 1).

In this post, I am going to continue with the main parameters we need for doing a forecast using “Exponential Smoothing”.

One of the parameters that I did not explain in the last post is “Trend with Damping”.

Trend with Damping

A damped trend can be helpful when we have an uncertain and complex long-term forecast, where extending the trend at full strength may lead to inaccurate results. To improve the forecast and reduce the error, we use the damped trend approach, which gradually flattens the trend as the forecast horizon grows. So if you are going to forecast over a long date range, it is better to set this parameter to True, or simply leave it as Automatic.

[Image: the Trend with Damping parameter in the visual’s settings]
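The visual handles the damping for us, but if you want to see the effect outside Power BI, the sketch below shows the same idea, assuming Python’s statsmodels library and a made-up monthly series (this is not the visual’s internal code): the damped trend levels off over a long horizon, while the plain trend keeps growing.

```python
# A minimal sketch of the damping idea, assuming Python's statsmodels library;
# this is not the Power BI visual's internal code, and the data is made up.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2015-01-01", periods=48, freq="MS")   # 4 years, monthly
y = pd.Series(100 + 2.5 * np.arange(48) + np.random.normal(0, 5, 48), index=idx)

# Plain additive trend: the trend is extrapolated at full strength forever
plain = ExponentialSmoothing(y, trend="add", damped_trend=False).fit()

# Damped additive trend: the trend flattens as the horizon grows, which
# usually reduces error for long-range forecasts
damped = ExponentialSmoothing(y, trend="add", damped_trend=True).fit()

# Two years ahead, the damped forecast ends up noticeably lower (flatter)
print(plain.forecast(24).iloc[-1], damped.forecast(24).iloc[-1])
```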

The next parameter is “Error Component”.

Error Component

If you remember the decomposition chart (picture below), we have three main components in a time series: trend, seasonal, and random (error).

 

[Image: decomposition chart showing the trend, seasonal, and random (error) components]
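A decomposition like the chart above can also be produced outside Power BI. As a rough illustration only, assuming statsmodels’ seasonal_decompose and a synthetic series (not whatever the visual uses internally), splitting a series into these three parts looks like this:

```python
# A sketch of decomposing a series into trend, seasonal, and random (error)
# parts, assuming statsmodels' seasonal_decompose; the series is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("2015-01-01", periods=48, freq="MS")
y = pd.Series(
    50                                              # base level
    + 1.5 * np.arange(48)                           # trend component
    + 10 * np.sin(2 * np.pi * np.arange(48) / 12)   # yearly seasonal component
    + np.random.normal(0, 2, 48),                   # random (error) component
    index=idx,
)

parts = seasonal_decompose(y, model="additive", period=12)
print(parts.trend.dropna().head())     # estimated trend
print(parts.seasonal.head())           # estimated seasonal pattern
print(parts.resid.dropna().head())     # what is left over: the random part
```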

In the last post, when explaining “Additive” and “Multiplicative”, you saw that we have three main elements: Seasonality, Trend, and Residual (error).

In Power BI, for each of these elements we are able to specify whether its impact is Additive or Multiplicative (see the image below).

[Image: the Error, Trend, and Seasonality components, each set to Additive or Multiplicative]
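To make the Additive/Multiplicative choice more concrete, here is a small sketch of the same three choices using statsmodels’ ETS implementation. This only mirrors the dropdowns conceptually; it is not how the Power BI visual is implemented, and the series is invented.

```python
# Choosing Additive vs Multiplicative for the error, trend, and seasonal
# components of an ETS model; a conceptual sketch only, with made-up data.
import numpy as np
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

idx = pd.date_range("2015-01-01", periods=48, freq="MS")
season = 1 + 0.2 * np.sin(2 * np.pi * np.arange(48) / 12)
y = pd.Series((100 + 2 * np.arange(48)) * season, index=idx)  # positive series

# ETS(A,A,A): additive error, trend, and seasonality
additive = ETSModel(y, error="add", trend="add",
                    seasonal="add", seasonal_periods=12).fit()

# ETS(M,A,M): multiplicative error and seasonality (needs positive data)
multiplicative = ETSModel(y, error="mul", trend="add",
                          seasonal="mul", seasonal_periods=12).fit()

# One way to see which specification suits the history better: compare AIC
print(additive.aic, multiplicative.aic)
```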

The other parameter that needs to be set up is “Target Seasonal Factor”.

Target Seasonal Factor

The data that we want to forecast may follow a yearly pattern, like annual sales, or it may change hour by hour, like temperature readings collected from a sensor. In other words, the collected data can vary from a yearly to an hourly granularity. This parameter helps us specify that seasonal unit, from Hour to Year.

[Image: the Target Seasonal Factor options, from Hour to Year]
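Under the hood, this kind of setting boils down to how many observations make up one seasonal cycle. The mapping below is only an illustrative assumption on my part (it depends on the granularity your data is actually recorded at), not values read from the visual:

```python
# Illustrative only: translating a target seasonal unit into the number of
# observations per cycle, which is what a seasonal model actually needs.
# The numbers assume one specific data granularity per row and are my own
# assumption, not something taken from the Power BI visual.
SEASONAL_PERIODS = {
    "Hour": 60,   # minute-level data: 60 observations per hour
    "Day": 24,    # hourly data: 24 observations per day
    "Week": 7,    # daily data: 7 observations per week
    "Month": 30,  # daily data: roughly 30 observations per month
    "Year": 12,   # monthly data: 12 observations per year
}

def seasonal_periods(target_factor: str) -> int:
    """Return the assumed number of observations per seasonal cycle."""
    return SEASONAL_PERIODS[target_factor]

print(seasonal_periods("Year"))  # 12, e.g. monthly sales with yearly seasonality
```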

Finally, under “Confidence Interval”, we have two more parameters that are worth setting up.

Confidence

Confidence intervals provide an upper and lower expectation for the real observation. These can be useful for assessing the range of possible outcomes for a prediction and for better understanding the skill of the model [1].
The value of prediction intervals is that they express the uncertainty in the forecasts. If we only produce point forecasts, there is no way of telling how accurate the forecasts are. But if we also produce prediction intervals, then it is clear how much uncertainty is associated with each forecast [2].
The confidence interval is the statistical percentage of certainty you want. When the confidence interval is 80%, we are saying that if we repeat the forecast for different samples, 80% of the time the observed values should fall within this interval. The upper interval factor is used to calculate the upper confidence: confidence interval + (100 – confidence interval) * upper interval factor [3].

So here we are able to specify the confidence percentage from 0 to 100. For instance, with a confidence of 90%, we can see how much the forecast may vary for 90% of the data: for the top point (925), the upper range runs from 968 (the darker area) to 1085 (the lighter bound).

 

[Image: forecast with a 90% confidence interval]

If I change the confidence to 50%, you can see that the values for the upper and lower levels change, as does the forecast interval.

[Image: forecast with a 50% confidence interval]
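If you want to see what the confidence percentage corresponds to numerically, the sketch below computes a 90% and a 50% prediction interval for the same forecast, assuming statsmodels’ ETS implementation and made-up data (so the numbers will not match the screenshots above): the point forecast stays the same, only the band around it widens or narrows.

```python
# A sketch of how the confidence percentage changes only the interval width,
# assuming statsmodels' ETSModel; the data is synthetic, not from the visual.
import numpy as np
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

idx = pd.date_range("2015-01-01", periods=48, freq="MS")
y = pd.Series(100 + 2 * np.arange(48) + np.random.normal(0, 5, 48), index=idx)

fit = ETSModel(y, error="add", trend="add").fit()
pred = fit.get_prediction(start=len(y), end=len(y) + 11)  # 12 steps ahead

# alpha = 1 - confidence: 0.10 gives a 90% interval, 0.50 a 50% interval
print(pred.summary_frame(alpha=0.10))  # wider band, like the 90% setting
print(pred.summary_frame(alpha=0.50))  # narrower band, like the 50% setting
```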

Upper Interval Factor

The other way to do a what-if analysis on the interval is to specify the upper level via the “Upper Interval Factor”; below it is set to 0.95.

[Image: forecast with the Upper Interval Factor set to 0.95]

So if I change it to 0.5 (50%), the upper and lower levels are as shown below.

[Image: forecast with the Upper Interval Factor set to 0.5]
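For reference, the formula quoted from [3] is easy to check by hand. The small helper below (the function is mine, not part of the visual) just applies confidence + (100 – confidence) * upper interval factor:

```python
# Worked example of the formula quoted from [3]; this helper is only an
# illustration and is not part of the Power BI visual.
def upper_confidence(confidence: float, upper_factor: float) -> float:
    """confidence + (100 - confidence) * upper interval factor."""
    return confidence + (100 - confidence) * upper_factor

print(upper_confidence(90, 0.95))  # 90 + 10 * 0.95 = 99.5
print(upper_confidence(90, 0.50))  # 90 + 10 * 0.50 = 95.0
```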

[1] https://machinelearningmastery.com/time-series-forecast-uncertainty-using-confidence-intervals-python/

[2] https://www.otexts.org/fpp/2/7

[3] http://www.dynamicinfo.nl/forecasting-with-power-bi/
