Forecast the Demand of New Products (aka Cold Start Modeling)

Forecasting the demand of to-be-launched products enables retailers to optimize inventory, logistics, and working capital, and be better prepared to serve their customers.

Overview

Business Problem

Having the right products in the right place at the right time has always been one of the most challenging aspects of retail. This is especially true for new products that a retailer is planning to launch. If a store prepares too much inventory for a new product, the product often has to be heavily discounted later to sell through, resulting in lower profit margins or even selling the merchandise at a loss. Buying more inventory than needed also ties up working capital. On the other hand, when a store doesn't prepare enough inventory, the result is lost sales and customer frustration.

Current approaches to this problem often involve analyzing historical sales of existing products at aggregated levels, such as department or category, and extrapolating them evenly to the new products. Because each new product is unique, this method fails to capture the nuances and interactions hidden in the properties of the products, such as package size, color, and display location, which are often critical factors in demand.

Intelligent Solution

Retailers can leverage AI to forecast new product sales by using the historical sales and intrinsic features (such as package size, color, and display location at the store) of existing products. They can also use insights produced by the AI, such as the product attributes that most strongly drive sales, to guide future product development. This information can help your merchandisers and category managers understand which products and attributes are more likely to lead to higher sales when launched during specific periods of time.

Value Estimation

How would I measure ROI for my use case? 

It is hard to estimate the full impact of a better forecast, since sales and operations planning is the backbone of many retailers. One framework for calculating the value of AI in this use case is an estimation that uses the following key data points:

  1. Forecasts produced by both the existing model (e.g., rule-based) and the AI model for a set of once-new products over a certain period of time.
  2. Actual demand of the same set of products over the same time period. 
  3. Total cost associated with stockout (lost sales) and overstock (lower margin and/or selling at cost) of the same set of products over the same time period.

Using 1) and 2), we can calculate the forecast accuracy of both the existing model and the AI model, using an accuracy metric such as MAE (Mean Absolute Error). From that, we can calculate the percentage accuracy gain of the AI model. The value of AI is then estimated by multiplying the accuracy gain by the total cost in 3).
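
To make the arithmetic concrete, here is a minimal Python sketch of this calculation. The file name, column names (actual_qty, existing_forecast, ai_forecast), and the cost figure are hypothetical placeholders:

```python
import pandas as pd

# Hypothetical evaluation table: one row per product per day, with actual
# demand plus the forecasts from the existing method and from the AI model.
df = pd.read_csv("once_new_products_eval.csv")

# Data points 1) and 2): Mean Absolute Error of each method.
mae_existing = (df["actual_qty"] - df["existing_forecast"]).abs().mean()
mae_ai = (df["actual_qty"] - df["ai_forecast"]).abs().mean()

# Percentage accuracy gain of the AI model.
accuracy_gain = (mae_existing - mae_ai) / mae_existing

# Data point 3): total stockout + overstock cost over the same period (assumed figure).
total_cost = 1_250_000.0

estimated_value = accuracy_gain * total_cost
print(f"Accuracy gain: {accuracy_gain:.1%}; estimated value of AI: ${estimated_value:,.0f}")
```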

Technical Implementation

About the Data

For illustrative purposes, this tutorial uses synthetic datasets we developed by replicating the data of a typical fashion retailer. Assuming today is March 25, 2018, the retailer is planning to launch 50 new products on June 24, 2018 (3 months away) and is keen to forecast their first 7 days of sales in order to plan and optimize inventory, supply chain, and working capital.

Problem Framing

The training dataset contains daily sales and features of 400 existing products, from April 1, 2017 to March 24, 2018. The highlighted column ‘sales_qty’ (the unit sales of a product on that date) is the target variable we want to predict. 

Features of products include product category, price, discount, rack position, etc. It is important to note that the product features are all known-in-advance features: we know the value of each feature for any given date in the future. As counterexamples, the weather and the number of customers visiting the store are NOT known-in-advance features, because we do not know their values 3 months from now. The reason we make this distinction is that our goal is to forecast the sales of new products launched in the future, so we should only include attributes whose future values we know for certain.

You may also notice that the training dataset does not have a column that uniquely identifies the product (aka the SKU). This is intentional and important: when training the AI, we do not want it to use the SKU as a predictor, because the SKUs of the new products we attempt to forecast will certainly differ from those of the existing products.

Note that beyond the existing features of the products, practitioners can derive additional features (aka feature engineering) to improve the accuracy of the model, as sketched below. For example, we can add the average sales quantity of the product category on the same date one year earlier as an additional column in the training data. Our experience shows that these time-lagged features often provide an accuracy lift. (For simplicity, we do not include engineered features in this use case.)
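
As an illustration of what such a feature could look like, here is a pandas sketch that adds the one-year-lagged category average, assuming the training data is already in the long format described in Data Preparation below, with date, product_category, and sales_qty columns:

```python
import pandas as pd

train = pd.read_csv("training_data.csv", parse_dates=["date"])

# Average unit sales per category per day.
cat_daily = (
    train.groupby(["product_category", "date"], as_index=False)["sales_qty"]
    .mean()
    .rename(columns={"sales_qty": "cat_avg_sales_1y_ago"})
)

# Shift the join key forward one year, so that merging on (category, date)
# retrieves the category average from the same date one year earlier.
cat_daily["date"] = cat_daily["date"] + pd.DateOffset(years=1)

train = train.merge(cat_daily, on=["product_category", "date"], how="left")
```

Because the lag depends only on past sales, it remains a known-in-advance feature and can be computed for the scoring rows as well.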

The scoring dataset is very similar to the training dataset. It contains the same set of features for the 50 new products, and for each product the date column runs from June 24, 2018 to June 30, 2018 (7 days). Note that it does not have the sales_qty target column, as we have no knowledge of it.
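
One way to construct such a scoring file is a simple cross join between the new products' attributes and the 7-day launch window. A pandas sketch, with a hypothetical input file name:

```python
import pandas as pd

# Hypothetical file: one row per new product with its known-in-advance
# attributes (product_category, original_price, floor_code, ...), no SKU.
new_products = pd.read_csv("new_product_attributes.csv")

# One row per day of the 7-day launch window.
launch_window = pd.DataFrame({"date": pd.date_range("2018-06-24", "2018-06-30")})

# Cross join: one row per product per day (50 x 7 = 350 rows), no sales_qty.
scoring = new_products.merge(launch_window, how="cross")
scoring.to_csv("scoring_data.csv", index=False)
```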

Sample Feature List
| Feature Name | Data Type | Description | Example |
|---|---|---|---|
| store_code | Categorical | A unique identifier of the store | 'Boston', 'f485656' |
| date | Date | Date the sales occurred | 4/2/2017 |
| product_category | Categorical | Category of the product | 'Jacket', 'k711170' |
| bonus_day | Boolean | Whether the customer earns double points on the date | TRUE |
| original_price | Numeric | The original price of the product | 99.9 |
| sales_price | Numeric | The discounted price of the product on that date | 80.8 |
| floor_code | Categorical | The type of floor the product is/will be displayed on | '1st', 'street', 'o364924' |
| n_display_location | Numeric | The number of locations in the store where the product is/will be displayed | 2 |
Data Preparation 

To create the above datasets for modeling and scoring, we often need to join the relevant tables from sales, marketing, infrastructure, etc. Once all the data is in one place, we 'stack' the data of different products on top of each other in a long format, as sketched below. Once the data is correctly structured, we can export it as a CSV file and drag-and-drop it into DataRobot, or we can write it to a table or view and set up a connection to that table or view in DataRobot to pull in the data.
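
A minimal pandas sketch of this preparation, with hypothetical table names and schemas:

```python
import pandas as pd

# Hypothetical source tables; real schemas will differ.
sales = pd.read_csv("daily_sales.csv", parse_dates=["date"])             # sku, store_code, date, sales_qty, sales_price
products = pd.read_csv("product_master.csv")                             # sku, product_category, original_price, floor_code, ...
marketing = pd.read_csv("marketing_calendar.csv", parse_dates=["date"])  # date, bonus_day

# Join everything into one long-format table:
# one row per product per store per day, products stacked on top of each other.
train = (
    sales.merge(products, on="sku", how="left")
         .merge(marketing, on="date", how="left")
)

# Drop the SKU so the model cannot use product identity as a predictor.
train = train.drop(columns=["sku"])

train.to_csv("training_data.csv", index=False)
```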

Model Training

DataRobot automates many parts of the modeling pipeline, so in this tutorial we focus on the specifics of the use case rather than the generic parts of the modeling process.

Interpret Results

Once the modeling process is complete, we can calculate Feature Impact under the Understand tab. This sensitivity analysis shows, at a macro level, which features most strongly drive the model's decisions. In this case, it shows that product category (product_category), display location (rack_position_number), and pricing (original_price, sales_price) are among the most impactful drivers of sales.

Feature impact
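
The same information is available programmatically. A minimal sketch using the DataRobot Python client, assuming a configured API token and a placeholder project ID:

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")
project = dr.Project.get("PROJECT_ID")  # placeholder project ID
model = project.get_models()[0]         # top model on the Leaderboard

# Compute (or fetch previously computed) Feature Impact for this model.
impact = model.get_or_request_feature_impact()
for row in sorted(impact, key=lambda r: r["impactNormalized"], reverse=True)[:5]:
    print(f'{row["featureName"]}: {row["impactNormalized"]:.3f}')
```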

To examine the model's decision making at a micro level, we can go to Prediction Explanations. This display shows the top factors behind the model's prediction for a single row.

Prediction explanations

We can also look at the partial dependence plot to examine the marginal effect of a feature on the predicted outcome of the model. You can find it in Feature Effects under the Understand tab. For example, the partial dependence plot for the variable sales_price (in the following image) shows that, in general, unit sales are higher when the discounted price of the product is lower.

Feature effects

Since the relationship above is rather noisy, and common sense tells us that sales should go up when prices go down, we can leverage the Monotonicity functionality of DataRobot to pass our 'human knowledge' to the AI, helping make it more accurate and/or explainable.

To do this, re-upload your data into DataRobot, but this time, before running Autopilot, first let DataRobot know that we are applying a monotonic decreasing constraint.

First, once the data is uploaded, create a new feature list with original_price and sales_price.

Second, select the Show Advanced Options link and the Feature Constraints tab, and select the new feature list from the Monotonic Decreasing dropdown.

Feature constraints
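
If you prefer to script this setup, here is a minimal sketch using the DataRobot Python client; the token, endpoint, and project name are placeholders:

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

# Create the project from the (re-uploaded) training file.
project = dr.Project.create("training_data.csv", project_name="new-product-forecast")

# Feature list containing only the price features we want to constrain.
price_features = project.create_featurelist(
    "price_features", features=["original_price", "sales_price"]
)

# Enforce a monotonically decreasing relationship between these features
# and the prediction, then run Autopilot.
project.set_target(
    target="sales_qty",
    mode=dr.AUTOPILOT_MODE.FULL_AUTO,
    advanced_options=dr.AdvancedOptions(
        monotonic_decreasing_featurelist_id=price_features.id
    ),
)
```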

After Autopilot finishes, you will see that the most accurate model has a Tweedie Deviance of 2.4028 and an RMSE (Root Mean Squared Error) of 2.7406, compared to a Tweedie Deviance of 2.4388 and an RMSE of 2.9142 for the most accurate model built without the monotonicity constraint. This means that, by explicitly passing our human knowledge to the AI, it becomes more accurate! If we examine the partial dependence plot of a model on the Leaderboard that carries the green Constrained by "monotonic decreasing" badge, we can see that predicted sales decrease smoothly as the price increases. This is exactly the constraint we imposed.

Evaluate Accuracy 

To analyze the accuracy of our champion model against the other challengers, examine the models on the Leaderboard. The results are sorted by the project metric (Tweedie Deviance in this case), computed on Backtest 1.

If we select the AVG Blender model and click the Evaluate tab, we can examine the Lift Chart and Accuracy Over Time. The closer the blue (predictions) and orange (actuals) curves are, the better the model. Since this is a synthetic dataset, there is a very close fit between the predictions and actuals. However, this likely won’t be the case with real data you ingest into the model. 

Lift chart
ML Model accuracy
Post-Processing

At this point, we have a model that forecasts the sales of a product on a given date, using a set of known-in-advance features. The model is trained using the history of existing products. Now, it is time for us to forecast the sales of our new products. For convenience, we will use the model with the Recommended for Deployment badge, which is automatically retrained from the most accurate, non-blender model using the most recent data. 

Select the Predict tab, then click Make Predictions and drag in the scoring file. (As mentioned earlier, the scoring file is similar to the training file: it contains the same set of features for the new products and does not have the target column.) After the upload finishes, click Compute Predictions, and download the predictions as a CSV file when the computation is complete. That's it. You now have the 7-day sales forecast for brand-new products without using any of their sales history!

Make predictions tab - DataRobot

Note that there are other ways to make predictions out of a DataRobot model, such as using the prediction API, as explained in the DataRobot documentation.
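
For instance, here is a minimal sketch of scoring the new products with the DataRobot Python client, assuming a placeholder project ID and the recommended model described above:

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")
project = dr.Project.get("PROJECT_ID")  # placeholder project ID

# The model carrying the "Recommended for Deployment" badge.
model = dr.ModelRecommendation.get(project.id).get_model()

# Upload the scoring file and score it against the model.
scoring_dataset = project.upload_dataset("scoring_data.csv")
predict_job = model.request_predictions(scoring_dataset.id)
predictions = predict_job.get_result_when_complete()  # pandas DataFrame

predictions.to_csv("new_product_forecast.csv", index=False)
```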

Business Implementation

Decision Environment

After you are able to find the right model that best learns patterns in your data, DataRobot makes it easy to deploy the model into your desired decision environment. Decision environments are the ways in which the predictions generated by the model will be consumed by the appropriate stakeholders in your organization, and how these stakeholders will make decisions using the predictions to impact the overall process. 

Decision Maturity 

Automation | Augmentation | Blend 

Inventory planners and logistics analysts will leverage the forecasts, together with their domain expertise and various other factors, to make decisions on the inventory and supply chain planning for the new products.

Model Deployment

Predictions can be made by a simple drag-and-drop of Excel spreadsheets. Practitioners can also use DataRobot's Prediction API to integrate the model's predictions with an existing system, as sketched below, or leverage DataRobot's write-back integration with some databases (MS SQL, Snowflake) to store the predictions directly in a database table. Predictions and prediction explanations can later be visualized in a BI dashboard, such as Tableau or Power BI. (Here's a detailed walkthrough for Tableau integration.)
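
As an illustration of the Prediction API route, here is a minimal sketch that posts the scoring file to a deployment's prediction endpoint; the host, deployment ID, token, and DataRobot key are placeholders:

```python
import requests

# Placeholders; real values come from your DataRobot deployment settings.
API_URL = "https://example.datarobot.com/predApi/v1.0/deployments/DEPLOYMENT_ID/predictions"
API_TOKEN = "YOUR_API_TOKEN"
DATAROBOT_KEY = "YOUR_DATAROBOT_KEY"

headers = {
    "Content-Type": "text/csv; charset=UTF-8",
    "Authorization": f"Bearer {API_TOKEN}",
    "DataRobot-Key": DATAROBOT_KEY,
}

# Post the scoring file as CSV; the response contains one prediction per row.
with open("scoring_data.csv", "rb") as f:
    response = requests.post(API_URL, data=f, headers=headers)
response.raise_for_status()

predictions = response.json()["data"]
```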

Decision Stakeholders

Decision Executors

Inventory planners, logistics analysts, finance analysts, merchandisers

Decision Managers

Inventory, logistics and finance managers

Decision Authors

Data scientists and business analysts

Decision Process

Merchandisers can understand which new products they should launch. Accurate sales forecasts of to-be-launched products enable inventory planners and logistics analysts to optimize inventory and the supply chain, ultimately reducing overstock and stockouts. Finance analysts can also leverage the forecasts to optimize working capital and set the retailer on a stronger balance sheet.

Model Monitoring 

If the retailer launches new products in seasonal batches, we recommend retraining the model before each batch prediction. If the retailer launches new products frequently throughout the year, we recommend using the Model Management (MLOps) module provided by DataRobot to monitor the data drift and accuracy of the model, and retraining the model on more recent data when data drift and/or model accuracy crosses a certain threshold, as sketched below.
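
As a rough sketch of such a threshold check, assuming an MLOps deployment with actuals being uploaded and a client version that exposes Deployment.get_accuracy; the threshold value is purely illustrative:

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")
deployment = dr.Deployment.get("DEPLOYMENT_ID")  # placeholder deployment ID

# Accuracy tracked by MLOps (assumes actual sales are uploaded as they arrive).
accuracy = deployment.get_accuracy()
recent_rmse = accuracy.metrics["RMSE"]["value"]

# Purely illustrative threshold; derive yours from a historical baseline.
RMSE_THRESHOLD = 3.5
if recent_rmse is not None and recent_rmse > RMSE_THRESHOLD:
    print("Accuracy degraded beyond threshold; retrain on more recent data.")
```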

Implementation Risks

Real-world turbulence could significantly impact the performance of the model. Carefully choosing a sensible training dataset may help mitigate the risk. Using the 2020 Covid-19 pandemic as an example: if the goal is to forecast the sales of a set of new products that will be launched before the pandemic ends, ideally we should use a short training dataset that covers sales behavior only during the pandemic (i.e., March 2020 onwards). This is to ensure the maximum consistency of intrinsic patterns between the training and scoring datasets. On the other hand, if the plan is to launch new products after the pandemic ends (e.g., after a vaccine becomes available), we recommend using a training dataset that does not cover the pandemic period, so the AI will be able to learn the patterns and behaviors during ‘normal’ years.
