
Practical Example: Designing Chat Merchandising

Overview

It is a very simple idea to implement, with a lot of business value for users: we are going to train our LLM model so that, given a question, it knows which data service provides the information. It would be great if we could give the user this explanation in natural language instead of forcing them to interpret the metrics themselves; this enhances comprehension and turns data into valuable information.

To do this, our architecture must meet three requirements:

- All data is exposed via APIs.
- All data entities are defined and documented.

The following diagram shows the high-level architecture of the solution:

- Merchandising AI Web Platform: the web channel, based on Vue, through which users interact with chat merchandising.
- Chat Merchandising Engine: a Python backend service that integrates the front end with the LLM service; in this case, we are using the OpenAI API (a sketch of this routing step follows below).
- Data Service: provides a REST API to consume the business data entities available in the data platform.
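To make the routing idea concrete, here is a minimal sketch of how the Chat Merchandising Engine could use the OpenAI API's function calling to decide which data service answers a question. The model name, tool schemas, endpoint names, and base URL are illustrative assumptions, not part of the original design.

```python
# Minimal sketch: let the model choose which Data Service endpoint answers a question.
# Endpoint names (/sales, /stock), the base URL, and parameter schemas are hypothetical.
import json
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DATA_SERVICE_URL = "https://data-service.internal/api"  # hypothetical base URL

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_sales",
            "description": "Sales and sell-through figures for a store and date range.",
            "parameters": {
                "type": "object",
                "properties": {
                    "store_id": {"type": "string"},
                    "date_from": {"type": "string", "description": "YYYY-MM-DD"},
                    "date_to": {"type": "string", "description": "YYYY-MM-DD"},
                },
                "required": ["store_id", "date_from", "date_to"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_stock",
            "description": "Current stock levels and stockouts for a store.",
            "parameters": {
                "type": "object",
                "properties": {"store_id": {"type": "string"}},
                "required": ["store_id"],
            },
        },
    },
]


def route_question(question: str) -> dict:
    """Ask the model which data service answers the question, then call that service."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        tools=TOOLS,
    )
    tool_calls = response.choices[0].message.tool_calls
    if not tool_calls:
        return {"error": "The model did not select a data service for this question."}
    call = tool_calls[0]
    endpoint = call.function.name.replace("get_", "")   # e.g. "sales" or "stock"
    params = json.loads(call.function.arguments)         # arguments chosen by the model
    return requests.get(f"{DATA_SERVICE_URL}/{endpoint}", params=params).json()
```

In this sketch the LLM never touches the data directly: it only picks the endpoint and its parameters, and the engine performs the actual REST call, which keeps data access inside the platform.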

Generative AI Merchandising Platform Use Cases

To enhance our merchandising platform, we can include two use cases:

1. Iterative Business Analytical Questions: allow business users to ask iterative questions about the data we have in our data platform, with the following capabilities: users can ask questions in natural language; the experience can be interactive, but it must also allow the user to save their personalized questions; and the answers will be based on up-to-date data.
2. Story Telling: when you provide the business user with data about the breakdown of their sales, a fundamental part is the storytelling, as sketched below.
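As an illustration of the storytelling use case, the sketch below passes freshly retrieved metrics to the model and asks for a plain-language explanation. The metric names, prompt wording, and model name are assumptions for the example, not the article's exact implementation.

```python
# Illustrative sketch of the storytelling step: turn numeric metrics into a short narrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def explain_metrics(metrics: dict) -> str:
    """Ask the model to narrate the metrics for a business user."""
    prompt = (
        "You are a retail merchandising analyst. Explain these weekly figures "
        "to a business user in two or three sentences, highlighting what changed "
        f"and why it matters:\n{metrics}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example call; in the real flow these numbers would come from the Data Service.
print(explain_metrics({"sell_through": 0.62, "previous_week": 0.48, "stockouts": 14}))
```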
The problem is that not all users have the same questions, and sometimes the level of customization required is so high that it makes the solution unmanageable. Most of the time, the information is available, but there is no time to include it in the web application. In the last few years, low-code solutions have appeared on the market that try to speed up application development precisely to respond as quickly as possible to the needs of this type of user. However, all of these platforms still require some technical knowledge. LLM models allow us to interact with our users in natural language and translate their questions into code and calls to the APIs in our platform, which can provide valuable information to them in an agile way.
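For completeness, this is one way the platform APIs mentioned above could look on the Data Service side, sketched here with FastAPI. The framework choice, route names, and sample data are assumptions rather than the article's actual implementation.

```python
# Hypothetical Data Service endpoint exposing a business data entity over REST.
from typing import Optional

from fastapi import FastAPI

app = FastAPI(title="Merchandising Data Service")

# Placeholder data; a real service would query the data platform instead.
SALES = {"store-001": {"units_sold": 1240, "sell_through": 0.62, "stockouts": 14}}


@app.get("/api/sales/{store_id}")
def get_sales(store_id: str, date_from: Optional[str] = None, date_to: Optional[str] = None):
    """Return sales behavior for one store, optionally filtered by date range."""
    return SALES.get(store_id, {})
```

Run with `uvicorn data_service:app` (assuming the file is named `data_service.py`); documenting each entity like this is what later lets the LLM map a natural language question to the right call.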

In this article, we will explain how the new generative AI models (LLMs) can improve the experience of business users on our analytical platform. Let's say we provide our retail merchandising managers with a web application or a mobile application where they can analyze sales and stock behavior in real time using natural language. These applications usually have a series of restrictions: they mainly show a generic type of analysis, which users can filter or segment with some predefined filters, and they provide information such as sales behavior, sell-through, stockouts, and stock behavior. All these data, with greater or lesser granularity, answer questions that someone has previously determined.
