Time-Dependent Representation for Neural Event Sequence Prediction

Nan Du
Samy Bengio
ICLR Workshop (2018)

Abstract

Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence. While this time-independent view of sequences is applicable to data such as natural language, e.g., words in a sentence, it is inappropriate and inefficient for many real-world events that are observed and collected at unequally spaced points in time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call. The time span between events can carry important information about the sequential dependence of human behaviors. To leverage continuous time in sequence prediction, we propose two methods for integrating time into event representations, based on intuitions about how time is tokenized in everyday life and on previous work on embedding contextualization. We focus in particular on using these methods in recurrent neural networks, which have gained popularity in many sequence prediction tasks. We evaluated these methods, along with baseline models, on two learning tasks: mobile app usage prediction and music recommendation. The experiments show that the proposed methods for time-dependent representation offer consistent gains in accuracy over baseline models that either feed the continuous time value directly into a recurrent neural network or do not use time at all.
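The abstract does not spell out the two proposed methods. As a minimal sketch of the general idea it describes, the PyTorch code below embeds each event jointly with the elapsed time since the previous event, projecting the time span onto a learned soft one-hot over coarse time buckets (echoing how time is "tokenized" in everyday life) and concatenating it with the event embedding before an RNN. All names here (`TimeAwareEventEmbedding`, `num_time_buckets`) and the particular bucketing and concatenation scheme are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class TimeAwareEventEmbedding(nn.Module):
    """Hypothetical sketch: represent an event jointly with its time span.

    The scalar time span is projected onto a learned soft one-hot over
    coarse "time buckets", each bucket contributes a learned embedding,
    and the result is concatenated with the event embedding.
    """

    def __init__(self, num_events, event_dim, num_time_buckets, time_dim):
        super().__init__()
        self.event_emb = nn.Embedding(num_events, event_dim)
        # Projects a scalar time span to logits over coarse time buckets.
        self.time_proj = nn.Linear(1, num_time_buckets)
        # Maps the soft bucket assignment to a time embedding.
        self.bucket_emb = nn.Linear(num_time_buckets, time_dim, bias=False)

    def forward(self, event_ids, time_spans):
        # event_ids: (batch, seq_len) int64; time_spans: (batch, seq_len) float
        e = self.event_emb(event_ids)                          # (B, T, event_dim)
        soft_one_hot = torch.softmax(
            self.time_proj(time_spans.unsqueeze(-1)), dim=-1)  # (B, T, buckets)
        t = self.bucket_emb(soft_one_hot)                      # (B, T, time_dim)
        return torch.cat([e, t], dim=-1)                       # joint representation


# Usage: feed the time-dependent representation to a GRU for next-event
# prediction; dimensions here are arbitrary for illustration.
emb = TimeAwareEventEmbedding(num_events=1000, event_dim=32,
                              num_time_buckets=8, time_dim=8)
rnn = nn.GRU(input_size=40, hidden_size=64, batch_first=True)
events = torch.randint(0, 1000, (4, 10))
spans = torch.rand(4, 10) * 3600.0  # seconds since the previous event
out, _ = rnn(emb(events, spans))
```

This contrasts with the paper's weaker baseline of feeding the raw continuous time value directly into the RNN input, which gives the network no inductive bias about meaningful time scales.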