Answering the Why-not Questions of Graph Query Autocompletion
Guozhong Li, Nathan Ng (Presenter), Peipei Yi, Zhiwei Zhang, Byron Choi
23rd International Conference on Database Systems for Advanced Applications (DASFAA 2018), Gold Coast, Australia
Graph query autocompletion (gQAC) helps users formulate graph queries in a visual environment (i.e., a GUI). It takes the graph query that a user is formulating as input and generates a ranked list of query suggestions. Since it is impossible to accurately predict the user's target query, the current state of the art in gQAC sometimes fails to produce useful suggestions. In such scenarios, it is natural for the user to ask why useful suggestions are not returned. In this paper, we address the why-not questions of gQAC. Specifically, given an intermediate query q, a target query qt, and a gQAC system X, the why-not questions of gQAC seek the minimal refinement of the configuration of X, with respect to a penalty model, such that at least one useful suggestion towards qt appears in the returned suggestions. We propose a generic ranking function for existing gQAC systems, present possible solutions to the why-not questions based on observations from simulations with existing gQAC systems, and propose a search algorithm for the why-not questions. An extensive experimental evaluation verifies both the effectiveness and efficiency of the proposed algorithm.
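As a rough illustration of the problem setting only, the sketch below searches a toy configuration space for the minimum-penalty refinement under which a useful suggestion appears. Here a configuration is just a hypothetical pair (k, w) of suggestion-list size and ranking weight, and the penalty is L1 distance from the original configuration; the paper's actual configuration and penalty model are richer.

```python
# Toy sketch of the why-not search (not the paper's algorithm).
# Assumptions: a configuration is a pair (k, w); penalty of a refinement
# is its L1 distance from the original configuration (k0, w0).
import itertools

def why_not(suggest, is_useful, q, qt, k0, w0, k_values, w_values):
    """Return the minimum-penalty configuration (k, w) under which at
    least one suggestion for q is useful towards target qt, else None."""
    def penalty(k, w):
        return abs(k - k0) + abs(w - w0)

    best = None
    for k, w in itertools.product(k_values, w_values):
        # Re-run the gQAC system under the candidate configuration
        if any(is_useful(s, qt) for s in suggest(q, k, w)):
            if best is None or penalty(k, w) < penalty(*best):
                best = (k, w)
    return best
```

With a stub gQAC system that suggests items 0..k-1, a target that only appears once k is at least 6 is answered by the refinement (6, 1.0) rather than the costlier (10, 1.0).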
FGreat: Focused Graph Query Autocompletion
Nathan Ng, Peipei Yi, Zhiwei Zhang, Byron Choi, Sourav S. Bhowmick, Jianliang Xu
35th IEEE International Conference on Data Engineering (ICDE 2019), Macau SAR, China
Composing queries is evidently a tedious task. This is particularly true of graph queries, as they are typically complex and error-prone, compounded by the fact that graph schemas can be missing or too loose to be helpful for query formulation. Graph Query AutoCompletion (gQAC) has received increasing research attention as a way to relieve users of the potentially painstaking task of graph query formulation. This demonstration presents an interactive visual Focused GRaph quEry AutocompleTion framework, called FGreat. Its novelty lies in the notion of user focus for gQAC, which is the subgraph of the current query that a user is focusing on. FGreat attempts to automatically complete the query at the focus, as opposed to an arbitrary query subgraph. Specifically, given a large collection of small or medium-sized data graphs and a visual query fragment q currently constructed by a user, FGreat returns the top-k query suggestions at the focus. The demonstration presents two approaches, applicable in different circumstances, for automatically computing the user focus: (i) from the sequence of edges that have been added to q, or (ii) from the position of the mouse cursor. We demonstrate that the user focus enhances both the effectiveness and efficiency of graph query autocompletion.
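As a rough sketch of approach (i), one plausible heuristic takes the focus to be the subgraph formed by the most recently added edges and their endpoint vertices; the demo's actual focus computation may differ, and the window size below is a made-up parameter.

```python
# Hypothetical edge-history focus heuristic (assumption: the focus is the
# subgraph induced by the last few edges the user drew; FGreat's real
# definition may differ).
from collections import OrderedDict

def focus_from_edit_history(edges_in_order, window=3):
    """Return (focus vertices, focus edges) from the last `window` edges
    added to the query, preserving the order vertices were touched."""
    recent = edges_in_order[-window:]
    focus_vertices = OrderedDict()   # ordered set: dedups, keeps order
    for u, v in recent:
        focus_vertices[u] = None
        focus_vertices[v] = None
    return list(focus_vertices), recent

# Example: the user drew a six-vertex ring, then attached O1 to C6 last;
# with window=2, the focus is the corner of the query being extended.
edits = [("C1", "C2"), ("C2", "C3"), ("C3", "C4"),
         ("C4", "C5"), ("C5", "C6"), ("C6", "C1"), ("C6", "O1")]
vertices, edges = focus_from_edit_history(edits, window=2)
# vertices -> ["C6", "C1", "O1"]; edges -> [("C6", "C1"), ("C6", "O1")]
```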
MFocus: Graph Query Autocompletion at User’s Cursor
Peipei Yi, Nathan Ng, Byron Choi, Sourav S. Bhowmick, Jianliang Xu
Composing queries is evidently a tedious task. This is particularly true of graph queries, as they are typically complex and error-prone, compounded by the fact that graph schemas can be missing or too loose to be helpful for query formulation. Graph query autocompletion has received increasing research attention as a way to relieve users of the potentially painstaking task of graph query formulation. In this demonstration, we present a novel interactive, user-focus-based visual subgraph query autocompletion framework. Given a large collection of small or medium-sized graphs and a visual query fragment q constructed by a user, we return the top-k query suggestions at the predicted user focus. We demonstrate that user focus exists in visual subgraph query formulation and effectively enhances query autocompletion.
Content-based Band Recommender System
Nathan Ng, Hu Junbo, Sun Jingxuan, Li Shiying
As the number of songs released by bands grows every day, users may struggle to choose new songs and have to spend a lot of time exploring new bands. This paper aims to recommend new songs and bands to a user by analyzing both the audio features and the lyrics of songs. In this project, our recommender system first acquires the user's playlist and scrapes the corresponding music data from the internet. We use quantitative audio analysis to characterize every song along 20 dimensions and find songs with high proximity in our dataset. Next, the recommender system computes the TF-IDF vector for the lyrics in the user's playlist, and uses the vector to find songs with similar topics and meanings. A ranking function, which allows users to tune the trade-off between music-style similarity and topic similarity, returns the top-k similar bands to the user. A simulation framework is also developed to test the effectiveness of our proposed system.
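The ranking idea can be sketched as a weighted blend of the two similarity signals. This is an illustrative reimplementation, not the authors' code: the candidate representation and the default weight alpha are assumptions, and the toy vectors below are made up.

```python
# Illustrative blend of audio-feature similarity and lyric TF-IDF
# similarity with a user-tunable weight alpha (assumed interface).
import math

def cosine(a, b):
    """Cosine similarity of two equal-length dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_bands(playlist_audio, playlist_lyrics, candidates, alpha=0.5, k=3):
    """Score each candidate (name, audio_vec, lyric_vec) by
    alpha * audio similarity + (1 - alpha) * lyric similarity;
    return the k highest-scoring band names."""
    scored = []
    for name, audio_vec, lyric_vec in candidates:
        score = (alpha * cosine(playlist_audio, audio_vec)
                 + (1 - alpha) * cosine(playlist_lyrics, lyric_vec))
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)[:k]]
```

Setting alpha near 1 favors bands that sound like the playlist; alpha near 0 favors bands whose lyrics cover similar topics.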