Traffic flow forecasting is a crucial task in urban computing. The challenge arises because traffic flows often exhibit intrinsic and latent spatio-temporal correlations that cannot be identified by extracting the spatial and temporal patterns of traffic data separately. We argue that such correlations are universal and play a pivotal role in traffic flow. We put forward spacetime interval learning as a paradigm to explicitly capture these correlations through a unified analysis of both spatial and temporal features. Unlike state-of-the-art methods, which are restricted to a particular road network, we model universal spatio-temporal correlations that are transferable from city to city. To this end, we propose a new spacetime interval learning framework that constructs a local-spacetime context for each traffic sensor, comprising the data from its neighbors at nearby time points. Based on this idea, we introduce the local spacetime neural network (STNN), which employs novel spacetime convolution and attention mechanisms to learn the universal spatio-temporal correlations. STNN captures local traffic patterns that do not depend on a specific network structure, so a trained STNN model can be applied to any unseen traffic network. We evaluate STNN on two public real-world traffic datasets and a simulated dataset on dynamic networks. The experimental results show that STNN not only improves prediction accuracy by 15% over state-of-the-art methods, but also handles dynamic changes in the traffic network effectively and exhibits superior generalization capability.
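To make the local-spacetime context concrete, the sketch below (not the paper's implementation) shows one way such a context might be assembled for a single sensor: readings from the sensor and its nearby neighbors over a short preceding time window are stacked into a small tensor. The array layout, the `neighbors` mapping and the function name are illustrative assumptions.

```python
import numpy as np

def local_spacetime_context(readings, neighbors, sensor, t, window):
    """Hedged sketch: stack the readings of `sensor` and its nearby neighbors
    over the `window` time steps preceding time t.

    `readings` is assumed to be a (time, num_sensors) array and
    `neighbors[sensor]` a list of nearby sensor indices; both are
    illustrative rather than taken from the paper."""
    cols = [sensor] + list(neighbors[sensor])
    # shape: (window, 1 + number of neighbors); the target sensor is column 0
    return readings[t - window:t, cols]
```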
Social structures naturally arise from social networks, yet no model adequately explains the emergence of these structural properties within a unified framework. Here, we unify explanations for the emergence of network structures by revealing the pivotal role of social capital, i.e., the benefits that a society grants to individuals, in network formation. We propose a game-based framework, social capital games, that mathematically conceptualizes social capital. Within this framework, individuals are regarded as independent learning agents that aim to gain social capital by building interpersonal ties. We adopt multi-agent reinforcement learning (MARL) to train the agents. By varying configurations of the game, we observe the emergence of the classical community, small-world, and core-periphery structures.
Influence maximization is a well-studied problem that asks for key individuals who have significant influence in a given social network. This paper addresses the problem when the social network structure is hidden. We adopt the framework of influence learning from samples and build a neural network model to represent the information diffusion process. Based on this model, we propose two new algorithms, NeuGreedy and NeuMax. NeuGreedy simulates the traditional greedy algorithm, whilst NeuMax utilizes the weights of connections between neurons. We test the algorithms on both synthetic and real-world datasets. The results verify the effectiveness of the proposed methods compared to existing algorithms that operate with or without the network structure.
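As a rough illustration of the selection procedure that NeuGreedy mimics, the sketch below runs a standard marginal-gain greedy loop on top of a learned influence estimator. The callable `influence_estimate`, which stands in for the trained neural diffusion model, and all other names are hypothetical, not the paper's code.

```python
def neural_greedy(candidates, k, influence_estimate):
    """Hedged sketch of a greedy seed-selection loop: at each step, add the node
    whose inclusion yields the largest increase in the estimated influence
    spread. `influence_estimate(seed_set) -> float` is a hypothetical stand-in
    for a trained diffusion model."""
    seeds = set()
    for _ in range(k):
        base = influence_estimate(seeds)
        best_node, best_gain = None, float("-inf")
        for v in candidates:
            if v in seeds:
                continue
            gain = influence_estimate(seeds | {v}) - base
            if gain > best_gain:
                best_node, best_gain = v, gain
        seeds.add(best_node)
    return seeds
```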
Interpersonal ties are pivotal to individual efficacy, status and performance in an agent society. This paper explores three important and interrelated themes in social network theory: the center/periphery partition of the network; network dynamics; and the social integration of newcomers. We tackle the question: How would a newcomer harness information brokerage to integrate into a dynamic network, moving from the periphery to the center? We model integration as the interplay between the newcomer and the dynamic network, and capture information brokerage through a process of relationship building. We analyze theoretical guarantees for the newcomer to reach the center through tactics, proving that a winning tactic always exists for certain types of network dynamics. We then propose three tactics and show their superior performance over alternative methods on four real-world datasets and four network models.
The recent mean field game (MFG) formalism has enabled the application of inverse reinforcement learning (IRL) methods in large-scale multi-agent systems, with the goal of inferring reward signals that can explain the demonstrated behaviours of large populations. Existing IRL methods for MFGs are built upon reducing an MFG to a Markov decision process (MDP) defined on the collective behaviours and average rewards of the population. However, we reveal that the reduction from an MFG to an MDP holds only in the fully cooperative setting. This limitation invalidates existing IRL methods on MFGs in non-cooperative environments. To capture more general behaviours in large populations, we study the use of individual behaviours to infer ground-truth reward functions for MFGs. We propose Mean Field IRL (MFIRL), the first IRL framework for MFGs that can handle both cooperative and non-cooperative environments. Based on this theoretically justified framework, we develop a practical algorithm that is effective for MFGs with continuous state-action spaces and unknown dynamics. We evaluate MFIRL on both cooperative and mixed cooperative-competitive scenarios with many agents. Results show that MFIRL excels in reward recovery, sample efficiency and robustness in the face of changing dynamics, compared to the method based on the reduction to an MDP.
Private data query combines mechanism design with privacy protection to produce aggregated statistics from privately-owned data records. The problem arises in a data marketplace where data owners have personalised privacy requirements and private data valuations. We focus on the case where the data owners are single-minded, i.e., they are willing to release their data only if the data broker guarantees to meet their announced privacy requirements. For a data broker who wants to purchase data from such data owners, we propose the Single-Minded Query (SMQ) mechanism, which uses a reverse auction to select data owners and determine their compensations. SMQ satisfies interim incentive compatibility, individual rationality, and budget feasibility. Moreover, it uses purchased privacy expectation maximisation as a principle to produce accurate outputs for commonly-used queries such as counting, median and linear predictor queries. The effectiveness of our method is validated empirically by a series of experiments.
In studies of social dynamics, cohesion refers to a group’s tendency to stay in unity, which, as argued in sociometry, arises from the network topology of interpersonal ties. We follow this idea and propose a game-based model of cohesion that not only relies on the social network, but also reflects individuals’ social needs. In particular, our model is a type of cooperative game in which players may gain popularity by strategically forming groups. A group is socially cohesive if the grand coalition is core stable. We study social cohesion in some special types of graphs and draw a link between social cohesion and the classical notion of structural cohesion by White and Harary. We then focus on the problem of deciding whether a given social network is socially cohesive and show that this problem is coNP-complete. Nevertheless, we give two efficient heuristics for finding coalition structures in which players enjoy high popularity, and we experimentally evaluate their performance.
The establishment of interpersonal ties is a pivotal problem in the structural analysis of social networks. In particular, the link recommendation problem asks which valuable future links an individual should establish. Existing methods for this problem rely on link prediction, which evaluates the likelihood of successful tie creation between two individuals. Such methods consider neither the social capital gained by agents nor the cost required by this process. In light of this limitation, we propose a utility-based network building problem, which aims to strike a balance between the gained social capital, in the form of closeness centrality, and the cost of establishing ties. We propose algorithms to solve this problem over networks whose nodes may or may not be labelled with attributes, and test their performance on a range of synthetic and real-world social networks. By having multiple agents adopt utility-based network building strategies, we obtain a suite of models of network formation and demonstrate empirically that they capture important structural properties. In particular, we investigate the emergence of a core/periphery structure as a joint result of preferential attachment and network building strategies.
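The trade-off between gained closeness centrality and the cost of ties can be illustrated with a small sketch, assuming the networkx library. The fixed per-tie cost and the function names below are our simplifications for illustration, not the paper's exact utility function.

```python
import networkx as nx

def tie_utility(G, agent, target, tie_cost=1.0):
    """Hedged sketch of a utility-based tie evaluation: the benefit of a new tie
    is the agent's gain in closeness centrality, and the utility is that gain
    minus a fixed cost per tie (an illustrative cost model)."""
    before = nx.closeness_centrality(G, u=agent)
    H = G.copy()
    H.add_edge(agent, target)
    after = nx.closeness_centrality(H, u=agent)
    return (after - before) - tie_cost

def best_tie(G, agent, tie_cost=1.0):
    """Greedily pick the non-neighbour whose tie maximises the sketched utility."""
    candidates = set(G.nodes) - set(G[agent]) - {agent}
    return max(candidates, key=lambda v: tie_utility(G, agent, v, tie_cost))
```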
The gist can be viewed as an abstract concept that represents only the quintessential meaning derived from one or multiple sources of information. We live in an age where vast quantities of information are widely available and easily accessible. Identifying the gist contextualises information, facilitating the fast disambiguation and prediction of related concepts and bringing about a set of natural relationships between information sources. In this paper, we investigate and introduce a novel unsupervised gist extraction and quantification framework that represents a computational form of the gist based on notions from fuzzy trace theory. To evaluate our proposed framework, we apply the gist to the task of semantic similarity, specifically to few-shot classification of large documents, which on average contain a large number of words. The results show that our proposed gist representation can effectively capture the essential information from a text document while dramatically reducing the number of features used.
Mandy is a primary care chatbot system created to assist healthcare staff by automating the patient intake process. The chatbot interacts with a patient by carrying out an interview, understanding their chief complaints in natural language, and submitting reports to the doctors for further analysis. The system provides a mobile-app front end for patients, a diagnostic unit, and a doctor’s interface for accessing patient records. The diagnostic unit consists of three main modules: an analysis engine for understanding patients’ symptom descriptions, a symptom-to-cause mapper for reasoning about potential causes, and a question generator for deriving further interview questions. The system combines data-driven natural language processing capability with knowledge-driven diagnostic capability. We evaluate our proof-of-concept on benchmark case studies and compare the system with existing medical chatbots.
Question answering (QA) has benefited significantly from deep learning techniques in recent years. However, domain-specific QA remains a challenge due to the significant amount of data required to train a neural network. This paper studies the answer sentence selection task in the Bible domain, answering questions by selecting relevant verses from the Bible. For this purpose, we create a new dataset, BibleQA, based on Bible trivia questions, and propose three neural network models for our task. We pre-train our models on a large-scale QA dataset, SQuAD, and investigate the effect of transferring weights on model accuracy. Furthermore, we measure model accuracy with different answer context lengths and different Bible translations. We find that transfer learning yields a noticeable improvement in model accuracy. We achieve relatively good results with shorter context lengths, whereas longer context lengths decrease model accuracy. We also find that using a more modern Bible translation in the dataset has a positive effect on the task.
Designing systems that make accurate forecasts from time-dependent data is a challenging and significant task. In this regard, a number of statistical and neural network-based models have been proposed for analyzing and forecasting time series datasets. In this paper, we propose a novel machine learning model for handling and predicting multivariate time series data. Our model focuses on a supervised learning setting in which (1) some features of the time series dataset exhibit periodic behaviour and (2) the time t is treated as an input feature. Owing to the periodic nature of multivariate time series datasets, our model is a simple neural network in which the inputs to the single output node are assumed to take the form A sin(Bt + C)x, as opposed to the standard form Ax + B. We train our proposed model on various datasets and compare its performance with standard, well-known models used in forecasting multivariate time series datasets. Our results show that the proposed model often outperforms existing models in terms of prediction accuracy. Moreover, the proposed model can handle time series data with missing values as well as input values that are non-equidistant in time. We hope that the proposed model will be useful in fostering future research on designing accurate forecasting algorithms.
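One plausible reading of the periodic input form A sin(Bt + C)x is sketched below, assuming PyTorch: each input feature x_i is modulated by a learnable term A_i sin(B_i t + C_i) before being linearly combined into the single output. The module and parameter names are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class PeriodicInputRegressor(nn.Module):
    """Hedged sketch: each input feature is modulated by a learnable periodic
    term A*sin(B*t + C) before being combined into one output, in contrast to
    the standard affine form A*x + B."""

    def __init__(self, n_features: int):
        super().__init__()
        self.A = nn.Parameter(torch.ones(n_features))           # amplitudes
        self.B = nn.Parameter(torch.ones(n_features))           # frequencies
        self.C = nn.Parameter(torch.zeros(n_features))          # phase shifts
        self.w = nn.Parameter(torch.randn(n_features) * 0.01)   # output weights
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features); t: (batch, 1), time supplied as an explicit input
        modulated = self.A * torch.sin(self.B * t + self.C) * x
        return modulated @ self.w + self.bias
```

Because t enters only through the sine terms, the sketch can in principle be evaluated at arbitrary (non-equidistant) time points, which is consistent with the behaviour described above.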
By exploiting the community affiliations of user accounts, an attacker may infer sensitive user attributes. This raises the problem of community structure deception (CSD), which asks for ways to minimally modify the network so that a given community structure maximally hides itself from community detection algorithms. We investigate CSD through an information-theoretic lens. To this end, we propose a community-based structural entropy to express the amount of information revealed by a community structure. This notion allows us to devise residual entropy minimization (REM) as an efficient procedure to solve CSD. Experimental results over 9 real-world networks and 6 community detection algorithms show that REM is very effective in obfuscating the community structure as compared to other benchmark methods.
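For readers unfamiliar with structural entropy, the sketch below computes the standard two-level structural entropy of a community partition in the style of Li and Pan's structural information theory; the paper's community-based variant may differ in detail, and this function (using networkx) is an illustrative assumption rather than the paper's definition.

```python
import math
import networkx as nx

def partition_structural_entropy(G, communities):
    """Hedged sketch of two-level structural entropy for a community partition.
    `communities` is a list of node sets covering G; lower entropy roughly means
    the partition reveals less structural information about the graph."""
    two_m = 2 * G.number_of_edges()
    H = 0.0
    for C in communities:
        vol_C = sum(G.degree(v) for v in C)            # total degree inside C
        cut_C = sum(1 for u, v in G.edges()            # edges crossing C's boundary
                    if (u in C) != (v in C))
        for v in C:
            d = G.degree(v)
            if d > 0 and vol_C > 0:
                H -= (d / two_m) * math.log2(d / vol_C)
        if vol_C > 0:
            H -= (cut_C / two_m) * math.log2(vol_C / two_m)
    return H
```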
Driver identification faces various challenges in real-time applications, including high-dimensional input data, moderate accuracy, scalability issues and the need for custom-built in-vehicle sensors. Conventional biometric solutions are either insufficiently accurate, unscalable, or too costly for practical application. In those solutions, high accuracy comes at the cost of privacy, while scalability and cost-effectiveness are inversely related. This paper proposes a driver identification scheme that addresses these challenges. The scheme learns an individual’s driving pattern from global positioning system (GPS) data; GPS receivers are commonly deployed in in-car navigation systems and can also be found in general-purpose hand-held devices on the market. The proposed scheme is innovative in providing a single solution to the existing challenges: it is practical and scales to large numbers of drivers, while remaining cost-effective and accurate, which makes it applicable in real-time settings with minimal overhead. Our analysis and empirical results show that the scheme can identify drivers with significant accuracy given only GPS data.
WiFi access points are sources of considerable security risk, as wireless signals have the potential to leak important private information such as passwords. This paper examines the security issues posed by point-of-sale (POS) terminals, which are widely used in WiFi-covered environments such as restaurants, banks and libraries. In particular, we envisage an attack model on passwords entered on POS terminals. We put forward WiPOS, a password inference system based on wireless signals. Specifically, WiPOS is a device-free system that uses two commercial off-the-shelf (COTS) devices to collect WiFi signals. By implementing a new keystroke segmentation algorithm and adopting support vector machine (SVM) classifiers with a global alignment kernel (GAK), WiPOS improves both keystroke recognition and password prediction. The experimental results show that WiPOS can achieve more than 73% accuracy for 6-digit passwords within the top 100 candidates. This paper calls on the community to take a closer look at the risks posed by today's ubiquitous WiFi devices.
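A minimal sketch of an SVM with a global alignment kernel over keystroke segments is shown below, assuming the tslearn library; the toy data, array shapes and labels are placeholders rather than the paper's WiFi signal traces or its actual pipeline.

```python
import numpy as np
from tslearn.svm import TimeSeriesSVC

# Hedged sketch: classify variable-length keystroke segments (time series) with
# an SVM using the global alignment kernel (GAK). The random data below are
# placeholders standing in for segmented WiFi signal windows.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50, 3))    # 60 segments, 50 time steps, 3 signal channels
y = rng.integers(0, 10, size=60)    # digit labels 0-9 for each keystroke segment

clf = TimeSeriesSVC(kernel="gak")   # GAK accounts for temporal alignment between segments
clf.fit(X[:40], y[:40])
print(clf.predict(X[40:]))
```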