Deep Learning and User Consumption Trends Classification and Analysis Based on Shopping Behavior

Yishu Liu, Jia Hou, Wei Zhao
DOI: 10.4018/JOEUC.340038

Abstract

Driven by the wave of digitalization, the booming e-commerce industry urgently requires in-depth analysis of user shopping behavior to improve the service experience. In view of the limitations of traditional models in handling complex shopping scenarios, this study proposes a deep learning model, VATA (a combination of variational autoencoder, transformer, and attention mechanism). Through this model, the authors strive to classify and analyze user shopping behavior more accurately and intelligently. The variational autoencoder (VAE) learns latent representations of users' personalized historical data, captures the implicit characteristics of shopping behavior, and improves the model's ability to handle real shopping situations. The Transformer more comprehensively captures the dependencies between shopping behaviors and plays an important role in understanding the overall structure of shopping behavior.

Methodology

Overview of Our Model

To address the shortcomings of traditional methods in user behavior analysis, this article proposes the VATA model, which integrates three key modules: a variational autoencoder (VAE), a Transformer (T), and an attention mechanism (A), to achieve deep learning-based classification and analysis of user shopping behavior.

In the VATA model, the VAE module is responsible for learning latent representations of users' personalized historical data. Because it is a generative model, it not only captures the implicit characteristics of shopping behavior but can also generate new samples when data are scarce, providing strong support for a more comprehensive understanding of users' personalized shopping behavior. The Transformer module models the global relationships in user historical data; through its self-attention mechanism, it better captures the dependencies between shopping behaviors and helps reveal their overall structure, and it works especially well with long-distance dependencies. The attention mechanism module strengthens the model's focus on the important information in a user's shopping behavior sequence, making the model concentrate on modeling the individual behaviors that matter, improving its sensitivity to the essential time steps in user behavior, and allowing it to adapt flexibly to the varying importance of different user behaviors.
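
As a concrete illustration of the VAE module's role, the following is a minimal PyTorch sketch of a VAE encoder for per-step behavior features. PyTorch itself and all layer sizes are our assumptions for illustration, not details given in the article.

import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Encode a user's per-step behavior features into a latent code.
    Hidden and latent sizes here are illustrative, not the article's values."""
    def __init__(self, input_dim=128, hidden_dim=64, latent_dim=32):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)

    def forward(self, x):
        h = self.hidden(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients flowing
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

Because the encoder outputs a distribution (mu, logvar) rather than a point, sampling from it is what lets a VAE generate plausible new behavior representations when data are scarce.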

Our model is built according to the following steps. First, the input layer takes users' personalized historical data as input, including shopping behavior sequences, click records, browsing duration, and other information. In the VAE module, feature learning is performed on the user's historical data to obtain a latent representation of the user's personalized behavior. In the Transformer module, global relationship modeling captures the dependencies between shopping behaviors more comprehensively. In the attention mechanism module, attention weights strengthen the focus on important information in the user behavior sequence. Finally, the classification output layer uses the learned features to classify users into different shopping types.
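
A minimal sketch of this full pipeline, under the same assumptions as above (PyTorch, illustrative dimensions, and an assumed number of shopping types), might look as follows; VAEEncoder refers to the sketch above.

class VATA(nn.Module):
    """End-to-end sketch of the described pipeline: VAE encoding ->
    Transformer over the sequence -> attention pooling -> classifier.
    All dimensions and the number of shopping types are assumptions."""
    def __init__(self, input_dim=128, latent_dim=32, n_heads=4, n_layers=2, n_classes=5):
        super().__init__()
        self.vae = VAEEncoder(input_dim, 64, latent_dim)
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=n_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.attn_score = nn.Linear(latent_dim, 1)  # additive attention over time steps
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):                              # x: (batch, seq_len, input_dim)
        z, mu, logvar = self.vae(x)                    # per-step latent codes
        h = self.transformer(z)                        # global dependencies across steps
        w = torch.softmax(self.attn_score(h), dim=1)   # weights over the sequence
        pooled = (w * h).sum(dim=1)                    # attention-weighted summary
        return self.classifier(pooled), mu, logvar     # logits plus VAE statistics

The attention pooling at the end is what lets the classifier weight the time steps that matter most for a given user, rather than averaging the whole sequence uniformly.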

The structural diagram of the overall model is shown in Figure 1.

Figure 1. Overall Model Flow Chart

The running process of the VATA model is shown in Algorithm 1.

Algorithm 1. VATA Model Training
Require: E-Commerce Dataset, Behavior Trajectory Dataset, Social Media Consumption Dataset, Temporal Shopping Dataset
Initialize VATA model parameters
Split datasets into training and testing sets
Initialize optimizer and loss function
for each epoch in training do
    for each batch in training set do
        Load batch of data (sequences, labels)
        Encode sequences using VAE module
        Apply Transformer module for global relationship modeling
        Apply Attention Mechanism for enhanced feature attention
        Calculate classification loss using encoded features and labels
        Backpropagate the loss and update model parameters
    end for
end for
Evaluate the model on testing set
Calculate Accuracy, Recall, F1 Score, AUC, etc.
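
Continuing the sketches above, Algorithm 1 could be realized with a training loop along the following lines. Here train_loader, test_loader, and num_epochs are assumed to exist, and the KL weight of 0.1 is an illustrative choice rather than a value from the article.

import torch
import torch.nn as nn
from sklearn.metrics import accuracy_score, recall_score, f1_score

model = VATA()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce_loss = nn.CrossEntropyLoss()

for epoch in range(num_epochs):
    model.train()
    for sequences, labels in train_loader:        # assumed DataLoader over the datasets
        optimizer.zero_grad()
        logits, mu, logvar = model(sequences)
        # Classification loss plus the standard VAE KL regularizer;
        # the 0.1 weight is an illustrative assumption
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = ce_loss(logits, labels) + 0.1 * kl
        loss.backward()
        optimizer.step()

# Evaluation on the testing set
model.eval()
preds, trues = [], []
with torch.no_grad():
    for sequences, labels in test_loader:
        logits, _, _ = model(sequences)
        preds.extend(logits.argmax(dim=1).tolist())
        trues.extend(labels.tolist())
print("Accuracy:", accuracy_score(trues, preds))
print("Recall:  ", recall_score(trues, preds, average="macro"))
print("F1 score:", f1_score(trues, preds, average="macro"))

Computing AUC would additionally require the softmax class probabilities rather than the argmax predictions, so it is omitted from this sketch.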
