Latest Articles

## Estimating and Controlling the False Discovery Rate of the PC Algorithm Using Edge-specific P-Values

Many causal discovery algorithms infer graphical structure from observational data. The PC algorithm... (more)

## Detecting Causal Relationships in Simulation Models Using Intervention-based Counterfactual Analysis

Central to explanatory simulation models is their capability to not just show that but also why... (more)

## Stable Specification Search in Structural Equation Models with Latent Variables

In our previous study, we introduced stable specification search for cross-sectional data (S3C). It is an exploratory causal method that combines the... (more)

## Local Learning Approaches for Finding Effects of a Specified Cause and Their Causal Paths

Causal networks are used to describe and to discover causal relationships among variables and data generating mechanisms. There have been many... (more)

## Measuring Conditional Independence by Independent Residuals for Causal Discovery

We investigate the relationship between conditional independence (CI) x ⫫ y | Z and the independence of two residuals... (more)

## Toward Accounting for Hidden Common Causes When Inferring Cause and Effect from Observational Data

Hidden common causes make it difficult to infer causal relationships from observational data. Here,... (more)

## BAMB: A Balanced Markov Blanket Discovery Approach to Feature Selection

The discovery of Markov blanket (MB) for feature selection has attracted much attention in recent years, since the MB of the class attribute is the optimal feature subset for feature selection. However, almost all existing MB discovery algorithms focus on either improving computational efficiency or boosting learning accuracy, instead of both. In... (more)

## Multi-View Fusion with Extreme Learning Machine for Clustering

Unlabeled, multi-view data presents a considerable challenge in many real-world data analysis tasks. These data are worth exploring because they often... (more)

## Take a Look Around: Using Street View and Satellite Images to Estimate House Prices

When an individual purchases a home, they simultaneously purchase its structural features, its accessibility to work, and the neighborhood amenities. Some amenities, such as air quality, are measurable while others, such as the prestige or the visual impression of a neighborhood, are difficult to quantify. Despite the well-known impacts intangible... (more)

##### NEWS

ACM Transactions on Intelligent Systems and Technology (TIST) is ranked among the best of all ACM journals in terms of citations received per paper.

2019 Journal Metrics:

- 2018 Impact Factor: 2.861
- 2018 5-year Impact Factor: 3.971
- Avg. Citations in ACM DL: 12.8

ACM Transactions on Intelligent Systems and Technology (ACM TIST) is a scholarly journal that publishes the highest quality papers on intelligent systems, applicable algorithms and technology with a multi-disciplinary perspective. An intelligent system is one that uses artificial intelligence (AI) techniques to offer important services (e.g., as a component of a larger system) to allow integrated systems to perceive, reason, learn, and act intelligently in the real world.

#### Introduction to the ACM TIST Special Issue on Intelligent Edge Computing for Cyber Physical and Cloud Systems

## Unified Generative Adversarial Networks for Multiple-Choice Oriented Machine Comprehension

In this paper, we address the multiple-choice Machine Comprehension (MC) problem in natural language processing (NLP). While existing approaches for MC are usually designed for general cases, we develop a novel method specifically for the multiple-choice MC problem. Taking inspiration from Generative Adversarial Nets (GANs), we first propose an adversarial framework for multiple-choice oriented MC, named McGAN. Specifically, our approach is a generative adversarial network-based method that unifies a generative model and a discriminative model. Working together, the generative model focuses on predicting relevant answers given a passage (text) and a question, while the discriminative model focuses on predicting their relevancy given an answer-passage-question set. Through the competition of adversarial training in a minimax game, the proposed method benefits from both models. To evaluate its performance, we test our McGAN model on three well-known datasets for multiple-choice MC. Our results show that McGAN achieves a significant increase in accuracy compared with existing models, consistently outperforming all tested baselines, including state-of-the-art techniques.

## Social Science Guided Feature Engineering: A Novel Approach to Signed Link Analysis

## Forecasting Price Trend of Bulk Commodities Leveraging Cross-domain Open Data Fusion

Forecasting the price trend of bulk commodities is important in international trade, not only for market participants to schedule production and marketing plans, but also for government administrators to adjust policies. Previous studies cannot support accurate fine-grained short-term prediction, since they mainly focus on coarse-grained long-term prediction using historical data. Recently, cross-domain open data has provided possibilities for fine-grained price forecasting, since it can be leveraged to extract various direct and indirect factors of the price. In this paper, we predict the price trend over upcoming days by leveraging cross-domain open data fusion. More specifically, we formulate the price trend into three classes (rise, slight-change, and fall), and then predict the specific class in which the price trend of the future day lies. We take three factors into consideration: (1) a supply factor considering sources providing bulk commodities, (2) a demand factor focusing on vessel transportation as a reflection of short-term needs, and (3) an expectation factor encompassing indirect features (e.g., air quality) with latent influences. A hybrid classification framework is proposed for the price trend forecasting. Evaluation conducted on nine real-world cross-domain open datasets shows that our framework can forecast the price trend accurately, outperforming multiple state-of-the-art baselines.

## Web Table Extraction, Retrieval and Augmentation: A Survey

Tables are a powerful and popular tool for organizing and manipulating data. A vast number of tables can be found on the Web, which represent a valuable knowledge resource. The objective of this survey is to synthesize and present two decades of research on web tables. In particular, we organize existing literature into six main categories of information access tasks: table extraction, table interpretation, table search, question answering, knowledge base augmentation, and table augmentation. For each of these tasks, we identify and describe seminal approaches, present relevant resources, and point out interdependencies among the different tasks.

## Is Rank Aggregation Effective in Recommender Systems? An Experimental Analysis

Recommender Systems are tools designed to help users find relevant information from the myriad of content available online. They work by actively suggesting items that are relevant to users according to their historical preferences or observed actions. Among recommender systems, top-N recommenders work by suggesting a ranking of N items that can be of interest to a user. Although a significant number of top-N recommender algorithms have been proposed in the literature, they often disagree in their returned rankings, offering an opportunity to improve the final recommendation ranking by aggregating the outputs of different algorithms. Rank aggregation has been used successfully in a significant number of areas, but only a few rank aggregation methods have been proposed in the recommender systems literature. Furthermore, there is a lack of studies regarding rankings' characteristics and their possible impacts on the improvements achieved through rank aggregation. This work presents an extensive two-phase experimental analysis of rank aggregation in recommender systems. In the first phase, we investigate the agreement and diversity characteristics of rankings recommended by fifteen different top-N recommender algorithms. In the second phase, we look at the results of fourteen rank aggregation methods and identify different scenarios where they perform best or worst according to the input rankings' characteristics. Our findings suggest that some of the results reported in the literature may be biased toward scenarios favorable to rank aggregation methods, whereas adverse scenarios are underexplored. For instance, rank aggregation methods achieved improvements of up to 22% in Mean Average Precision (MAP) in the best scenario considered, while in the worst they presented worse results than individual recommendation methods. We show that simple dataset characteristics and the average performance of the individual recommendation methods may give hints on whether it is worth aggregating their rankings.
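Rank aggregation itself can be as simple as a positional scheme such as Borda count. The sketch below is only a generic illustration of the idea of combining disagreeing top-N rankings, not any of the fourteen methods the paper evaluates; it assumes each recommender returns a best-first item list:

```python
from collections import defaultdict

def borda_aggregate(rankings, n=10):
    """Aggregate several top-N rankings with Borda count.

    rankings: list of ranked item lists (best first).
    Returns the top-n items by total Borda score.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        m = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += m - pos  # higher rank -> more points
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Three hypothetical recommenders that partially disagree:
r1 = ["a", "b", "c", "d"]
r2 = ["b", "a", "d", "c"]
r3 = ["a", "c", "b", "e"]
print(borda_aggregate([r1, r2, r3], n=3))  # → ['a', 'b', 'c']
```

Positional schemes like this are sensitive to exactly the ranking characteristics (agreement, diversity) that the paper's two-phase analysis investigates.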

## Copula-based Anomaly Scoring and Localization for Large-scale, High-dimensional Continuous Data

The anomaly detection method presented in this paper has a special feature: it not only indicates whether an observation is anomalous, but also tells what exactly makes an anomalous observation unusual. Hence, it provides support for localizing the reason for the anomaly. The proposed approach is model-based; it relies on the multivariate probability distribution associated with the observations. Since rare events are present in the tails of probability distributions, we use copula functions, which are able to model fat-tailed distributions well. The presented procedure scales well; it can cope with a large number of high-dimensional samples. In the second part of the paper, we demonstrate the usability of the method through a case study in which we analyze a large data set consisting of the performance counters of a real mobile telecommunication network. Since such networks are complex systems, the signs of sub-optimal operation can remain hidden for a potentially long time. With the proposed procedure, many such hidden issues can be isolated and indicated to the network operator.
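A minimal sketch of the general copula idea, assuming a Gaussian copula rather than the fat-tailed families the paper favors: map each marginal to normal scores through its empirical CDF, model dependence via the correlation of those scores, and split the resulting Mahalanobis score per feature to localize what made a point unusual:

```python
import numpy as np
from statistics import NormalDist

def copula_scores(X):
    """Gaussian-copula anomaly scores with per-feature localization.

    X: (n, d) array of continuous observations.
    Marginals are mapped to normal scores via their empirical CDFs;
    dependence is captured by the correlation of those scores.
    Returns (scores, contrib): scores[i] is the squared Mahalanobis
    distance of observation i in copula space, and contrib[i] splits
    it into per-feature contributions (contrib[i].sum() == scores[i]).
    """
    n, d = X.shape
    nd = NormalDist()
    # probability integral transform via ranks, then normal scores
    ranks = X.argsort(axis=0).argsort(axis=0) + 1
    U = ranks / (n + 1)                       # empirical CDF values in (0, 1)
    Z = np.vectorize(nd.inv_cdf)(U)           # normal scores
    R = np.corrcoef(Z, rowvar=False)          # Gaussian-copula correlation
    P = np.linalg.inv(R)
    scores = np.einsum("ij,jk,ik->i", Z, P, Z)  # Mahalanobis^2 per row
    contrib = Z * (Z @ P)                       # per-feature decomposition
    return scores, contrib
```

The per-feature decomposition is what enables the localization the abstract describes: the features with the largest contributions are the ones that make the observation unusual.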

## Using Sub-Optimal Plan Detection to Identify Commitment Abandonment in Discrete Environments

Assessing whether an agent has abandoned a goal or is actively pursuing it is important when multiple agents are trying to achieve joint goals, or when agents commit to achieving goals for each other. Making such a determination for a single goal by observing only plan traces is not trivial, as agents often deviate from optimal plans for various reasons, including the pursuit of multiple goals or the inability to act optimally. In this article, we develop an approach based on domain-independent heuristics from automated planning, landmarks, and fact partitions to identify sub-optimal action steps with respect to a plan within a plan execution trace. Such a capability is very important in domains where multiple agents cooperate and delegate tasks among themselves, e.g., through social commitments, and need to ensure that a delegating agent can infer whether or not another agent is actually progressing towards a delegated task. We empirically show, for a number of representative domains, that our approach infers sub-optimal action steps with very high accuracy and detects commitment abandonment in nearly all cases.

## DHPA: Dynamic Human Preference Analytics Framework --- A Case Study on Taxi Drivers' Learning Curve Analysis

Many real-world human behaviors can be modeled and characterized as sequential decision-making processes, such as a taxi driver's choices of working regions and times. Each driver possesses unique preferences over these sequential choices and improves their working efficiency over time. Understanding the dynamics of such preferences helps accelerate the learning process of taxi drivers. Prior works on taxi operation management mostly focus on finding optimal driving strategies or routes, lacking in-depth analysis of what the drivers learned during the process and how it affects their performance. In this work, we make the first attempt to establish Dynamic Human Preference Analytics (DHPA). We inversely learn taxi drivers' preferences from data and characterize the dynamics of those preferences over time. We extract two types of features, i.e., profile features and habit features, to model the decision space of drivers. Then, through inverse reinforcement learning, we learn the drivers' preferences with respect to these features. The results illustrate that self-improving drivers tend to keep adjusting their preferences for habit features to increase their earning efficiency, while keeping their preferences for profile features invariant. In contrast, experienced drivers have stable preferences over time, and exploring drivers tend to adjust their preferences randomly.

## Transfer Learning with Dynamic Distribution Adaptation

Transfer learning aims to learn robust classifiers for the target domain by leveraging knowledge from a source domain. Since the source and target domains are usually drawn from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions. However, in real applications, the marginal and conditional distributions usually contribute differently to the domain discrepancy. Existing methods fail to quantitatively evaluate the different importance of these two distributions, which results in unsatisfactory transfer performance. In this paper, we propose a novel concept called Dynamic Distribution Adaptation (DDA), which is capable of quantitatively evaluating the relative importance of each distribution. DDA can be easily incorporated into the framework of structural risk minimization to solve transfer learning problems. On the basis of DDA, we propose two novel learning algorithms: (1) Manifold Dynamic Distribution Adaptation (MDDA) for traditional transfer learning, and (2) Dynamic Distribution Adaptation Network (DDAN) for deep transfer learning. Extensive experiments demonstrate that MDDA and DDAN significantly improve transfer learning performance and set up a strong baseline over the latest deep and adversarial methods on digit recognition, sentiment analysis, and image classification. More importantly, it is shown that marginal and conditional distributions contribute differently to the domain divergence, and our DDA is able to provide a good quantitative evaluation of their relative importance, which leads to better performance. We believe this observation can be helpful for future research in transfer learning.
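The core DDA idea, a weight µ that trades off the marginal against the conditional discrepancy, can be sketched with a linear-kernel MMD. This is an illustration only: the paper uses richer kernels and estimates µ from data rather than taking it as a fixed input:

```python
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared MMD with a linear kernel: distance between feature means."""
    return float(np.sum((Xs.mean(0) - Xt.mean(0)) ** 2))

def dynamic_distance(Xs, ys, Xt, yt, mu):
    """Weighted combination of marginal and conditional discrepancies.

    mu in [0, 1]: mu = 0 uses only the marginal term, mu = 1 only the
    per-class conditional terms. DDA's key point is that mu should be
    estimated, since the two terms matter differently per problem.
    yt would be pseudo-labels in an actual unsupervised transfer setup.
    """
    marginal = linear_mmd(Xs, Xt)
    classes = np.intersect1d(ys, yt)
    conditional = np.mean(
        [linear_mmd(Xs[ys == c], Xt[yt == c]) for c in classes]
    )
    return (1 - mu) * marginal + mu * conditional
```

Sweeping µ over [0, 1] and picking the value that minimizes a proxy error is one simple stand-in for the quantitative evaluation the abstract describes.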

## Newton Methods for Convolutional Neural Networks

Deep learning involves a difficult non-convex optimization problem, which is often solved by stochastic gradient (SG) methods. While SG is usually effective, it may not be robust in some situations. Recently, Newton methods have been investigated as an alternative optimization technique, but nearly all existing studies consider only fully-connected feedforward neural networks. They do not investigate other types of networks, such as Convolutional Neural Networks (CNNs), which are more commonly used in deep-learning applications. One reason is that Newton methods for CNNs involve complicated operations, and so far no work has conducted a thorough investigation. In this work, we give details of all building blocks, including function, gradient, and Jacobian evaluation, and Gauss-Newton matrix-vector products. These basic components are very important because, with them, further developments of Newton methods for CNNs become possible. We show that an efficient MATLAB implementation can be done in just several hundred lines of code and demonstrate that the Newton method gives competitive test accuracy.
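For a least-squares loss, the Gauss-Newton matrix is G = JᵀJ, and subproblem solvers such as conjugate gradient only ever need products Gv. A toy numpy sketch with a dense numeric Jacobian on a tiny one-layer model (real implementations, including the MATLAB one the paper describes, compute the product matrix-free):

```python
import numpy as np

def model(w, x):
    """Tiny one-layer network: tanh(x @ W), parameters flattened in w."""
    W = w.reshape(2, 3)
    return np.tanh(x @ W).ravel()

def jacobian(f, w, eps=1e-6):
    """Dense forward-difference Jacobian of f at w (fine at toy sizes)."""
    f0 = f(w)
    J = np.zeros((f0.size, w.size))
    for i in range(w.size):
        dw = np.zeros_like(w)
        dw[i] = eps
        J[:, i] = (f(w + dw) - f0) / eps
    return J

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2))   # 4 samples, 2 inputs
w = rng.standard_normal(6)        # 6 parameters
v = rng.standard_normal(6)

J = jacobian(lambda w_: model(w_, x), w)
Gv = J.T @ (J @ v)   # Gauss-Newton matrix-vector product; G is never formed
print(Gv.shape)      # → (6,)
```

Note that `vᵀGv = ‖Jv‖² ≥ 0`, so the Gauss-Newton matrix is positive semi-definite, which is exactly what makes it a safer curvature surrogate than the full Hessian in non-convex training.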

## Travel Recommendation via Fusing Multi-Auxiliary Information into Matrix Factorization

As an e-commerce feature, personalized recommendation is highly valued by both consumers and merchants, and e-tourism has become one of the hottest industries to adopt recommendation systems. Several lines of evidence have confirmed that travel-product recommendation is quite different from traditional recommendation. Travel products are usually browsed and purchased relatively infrequently compared with other traditional products (e.g., books, food, etc.), which gives rise to the extreme sparsity of travel data. Meanwhile, the choice of a suitable travel product is affected by a range of factors, such as departure, destination, and financial and time budgets. To address these challenging problems, in this paper, we propose a Probabilistic Matrix Factorization with Multi-Auxiliary Information (PMF-MAI) model in the context of travel-product recommendation. In particular, PMF-MAI is able to fuse probabilistic matrix factorization on the user-item interaction matrix with linear regression on a suite of features constructed from the multiple auxiliary information. In order to cope with the sparse data, PMF-MAI is trained in a semi-supervised manner that utilizes unobserved data to increase the coupling between probabilistic matrix factorization and linear regression. Extensive experiments are conducted on a real-world dataset provided by a large tourism e-commerce company. PMF-MAI shows an overwhelming superiority over all competitive baselines on recommendation performance. Also, the importance of features is examined to reveal the crucial auxiliary information that has a great impact on the adoption of travel products.

## A Visual Analysis Approach for Understanding Durability Test Data of Automotive Products

In the current era of Industry 4.0, people are facing data-rich manufacturing environments. Visual analytics, as an important technology for explaining and understanding complex data, has been increasingly introduced into industrial data analysis scenarios. Taking the durability testing of automotive starters as background, this paper proposes a visual analysis approach for understanding large-scale, long-term starter durability test data. Guided by detailed scenario and requirement analyses, we first propose a migration-adapted DBSCAN algorithm to identify starting modes and abnormal tests. This algorithm adopts a segmentation strategy and a group of matching and updating operations to achieve efficient and accurate clustering of the data. Next, we design and implement a visual analysis system that provides a set of user-friendly visual designs and lightweight interactions to help people gain insights into the test process overview, test data patterns, and durability performance dynamics. Finally, we conduct a quantitative algorithm evaluation, a case study, and a user interview using real-world starter durability test datasets. The results demonstrate the effectiveness of the approach and its possible inspiration for the durability test data analysis of other, similar industrial products.
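For reference, the base algorithm underlying the migration-adapted variant is plain DBSCAN, whose noise label maps naturally onto "abnormal test" detection. A compact sketch of the standard algorithm only; the paper's segmentation, matching, and updating operations are not shown:

```python
import numpy as np

def dbscan(X, eps=0.5, min_pts=4):
    """Plain DBSCAN: density-based clustering with noise detection.

    Returns an integer label per point; -1 marks noise (here, a stand-in
    for abnormal tests that belong to no dense starting mode).
    """
    n = len(X)
    labels = np.full(n, -1)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    visited = np.zeros(n, bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue                      # not an unvisited core point
        stack, visited[i] = [i], True     # grow a new cluster from core i
        while stack:
            p = stack.pop()
            labels[p] = cluster
            if len(neighbors[p]) >= min_pts:   # only core points expand
                for q in neighbors[p]:
                    if not visited[q]:
                        visited[q] = True
                        stack.append(q)
        cluster += 1
    return labels
```

The quadratic distance matrix is what the paper's segmentation strategy would avoid on large-scale test data; this sketch is only meant to show the clustering-plus-noise behavior.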

## Discovering Interesting Sub-Paths with Statistical Significance from Spatio-temporal Datasets

Given a path in a spatial or temporal framework, we aim to find all contiguous sub-paths that are both interesting (e.g., abrupt changes) and statistically significant (i.e., persistent trends rather than local fluctuations). Discovering interesting sub-paths can provide meaningful information for a variety of domains, including Earth science, environmental science, and urban planning. Existing methods are limited to detecting individual points of interest along an input path but cannot find interesting sub-paths. Our preliminary work provided a Sub-path Enumeration and Pruning (SEP) algorithm to detect interesting sub-paths of arbitrary length. However, SEP is not effective in avoiding sub-paths that are random variations rather than meaningful trends, which hampers clear and proper interpretation of the results. In this paper, we extend our previous work by proposing a statistical significance test framework to eliminate these random variations. To compute the statistical significance, we first present a baseline Monte-Carlo method based on our previous work and then propose a Dynamic Search-and-Prune (D-SAP) algorithm to improve its computational efficiency. Our experiments show that significance testing can greatly suppress noisy detections in the output and that D-SAP can greatly reduce the execution time.
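The significance idea can be sketched with a simple interestingness score and a Monte-Carlo permutation test. This is illustrative only: the score, null model, and search strategy of the paper's SEP/D-SAP algorithms are their own:

```python
import random

def subpath_change(seq, i, j):
    """A hypothetical interestingness score: total change over seq[i..j]."""
    return abs(seq[j] - seq[i])

def mc_pvalue(seq, i, j, trials=1000, seed=7):
    """Monte-Carlo p-value for sub-path seq[i..j].

    Counts how often a random permutation of the series contains an
    equally strong change over some window of the same length. A small
    p-value suggests a persistent trend rather than a local fluctuation.
    """
    rng = random.Random(seed)
    observed = subpath_change(seq, i, j)
    w = j - i
    hits = 0
    for _ in range(trials):
        perm = seq[:]
        rng.shuffle(perm)
        best = max(subpath_change(perm, k, k + w)
                   for k in range(len(perm) - w))
        hits += best >= observed
    return hits / trials
```

On a strongly trending series the p-value is tiny, while on an oscillating series of the same values it is large, which is exactly the "suppress noisy detections" effect the experiments report. D-SAP's contribution is avoiding this brute-force simulation cost.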

## Pair-based Uncertainty and Diversity Promoting Early Active Learning for Person Re-identification

The effective training of supervised Person Re-identification (Re-ID) models requires sufficient pairwise labeled data. However, when annotation resources are limited, it is difficult to collect pairwise labeled data. We consider a challenging and practical problem called Early Active Learning, which applies to the early stage of experiments, when no pre-labeled samples are available as references for human annotation. Previous early active learning methods suffer from two limitations for Re-ID. First, these instance-based algorithms select instances rather than pairs, which can result in missing optimal pairs for Re-ID. Second, most of these methods only consider the representativeness of instances, which can result in selecting less diverse and less informative pairs. To overcome these limitations, we propose a novel pair-based active learning method for Re-ID. Our algorithm selects pairs instead of instances from the entire dataset for annotation. Besides representativeness, we further take into account the uncertainty and the diversity of pairwise relations. Therefore, our algorithm can produce the most representative, informative, and diverse pairs for Re-ID data annotation. Extensive experimental results on five benchmark person re-identification datasets have demonstrated the superiority of the proposed pair-based early active learning algorithm.

## Trembr: Exploring Road Networks for Trajectory Representation Learning

In this paper, we propose a novel representation learning framework, namely TRajectory EMBedding via Road networks (Trembr), to learn trajectory embeddings (low-dimensional feature vectors) for use in a variety of trajectory applications. The novelty of Trembr lies in 1) the design of a recurrent neural network (RNN) based encoder-decoder model, namely Traj2Vec, that encodes spatial and temporal properties inherent in trajectories into trajectory embeddings, while exploiting the underlying road networks to constrain the learning process, and 2) the design of a neural network based model, namely Road2Vec, to learn road segment embeddings in road networks that capture various relationships amongst road segments in preparation for trajectory representation learning. In addition to model design, several unique technical issues arising in Trembr, including data preparation in Road2Vec, the road segment relevance-aware loss, and the network topology constraint in Traj2Vec, are examined. To validate our ideas, we learn trajectory embeddings using multiple large-scale real-world trajectory datasets, and use them in three tasks: trajectory similarity measurement, travel time prediction, and destination prediction. Empirical results show that Trembr soundly outperforms the state-of-the-art trajectory representation learning models trajectory2vec and t2vec, by at least one order of magnitude in mean rank for trajectory similarity measurement, and by 23.3% to 41.7% in mean absolute error (MAE) for travel time prediction and 39.6% to 52.4% in MAE for destination prediction.

## Market Clearing based Dynamic Multi-Agent Task Allocation

## Single Image Snow Removal Using Sparse Representation and Particle Swarm Optimizer

Images are often corrupted by natural obscuration (e.g., snow, rain, and haze) during acquisition in bad weather conditions. The removal of snowflakes from a single image is a challenging task due to situational variety and has been investigated only rarely. In this paper, we propose a novel snow removal framework for a single image, which can be separated into a sparse image approximation module and an adaptive tolerance optimization module. The first module takes advantage of sparsity-based regularization to reconstruct a potential snow-free image. The second module then provides an auto-tuning mechanism that seeks a better reconstruction of the snow-free image via time-varying inertia weight particle swarm optimizers. Through the iterative collaboration of these two modules, the number of snowflakes in the reconstructed image is reduced as generations progress. Experimental results show that the proposed method achieves better snow-removal efficacy than other state-of-the-art techniques in both objective and subjective evaluations. As a result, the proposed method is able to remove snowflakes successfully from a single image while preserving most of the original object structure information.
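The optimizer in the second module is a particle swarm with a time-varying (linearly decreasing) inertia weight. A generic sketch of that scheme on a toy objective; the paper applies it to tune reconstruction tolerances, not to the sphere function used here:

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, seed=0):
    """Particle swarm optimizer with linearly decreasing inertia weight.

    Early iterations (large w) favor exploration; later ones (small w)
    favor exploitation around the best positions found so far.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()        # global best position
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # time-varying inertia
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best, val = pso(lambda p: float(np.sum(p ** 2)))   # minimize the sphere function
print(round(val, 6))
```

In the paper's framework the objective would instead score the sparse reconstruction's snow-freeness at each generation, and the decaying inertia is what lets the search settle on a tolerance instead of oscillating.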

## Graph-based Recommendation Meets Bayes and Similarity Measures

Graph-based approaches provide an effective memory-based alternative to latent factor models for collaborative recommendation. Modern approaches rely on either sampling short walks or enumerating short paths starting from the target user in a user-item bipartite graph. While the effectiveness of random walk sampling heavily depends on the underlying path sampling strategy, path enumeration is sensitive to the strategy adopted for scoring each individual path. In this paper, we demonstrate how both strategies can be improved through Bayesian reasoning. In particular, we propose to improve random walk sampling by exploiting distributional aspects of items' ratings on the sampled paths. Likewise, we extend existing path enumeration approaches to leverage categorical ratings and to scale the score of each path proportionally to the affinity of pairs of users and pairs of items on the path. Experiments on several publicly available datasets demonstrate the effectiveness of our proposed approaches compared to state-of-the-art graph-based recommenders.

## Comparison and Modelling of Country-Level Micro-blog User Behaviour and Activity in Cyber-Physical-Social Systems using Weibo and Twitter Data

With the rapid development of social media technologies, the cyber-physical-social system (CPSS) has become a hot topic in many industrial applications. The use of "micro-blogging" services, such as Twitter, has rapidly become an influential way to share information. While recent studies have revealed that understanding and modelling micro-blog user behavior from massive user-behavior data is key to the success of many practical CPSS applications, a key challenge in the literature is that geographic and cultural diversity strongly affects micro-blog user behavior and activity. The motivation of this paper is to understand the differences and similarities between the behaviors of users from different countries on social networking platforms. To this end, we propose a Country-Level Micro-Blog User (CLMB) behavior and activity model for analyzing micro-blogging users' behavior across different countries in CPSS applications. The CLMB model considers three important user behavior characteristics: the content of micro-blogs, a user emotion index, and the user relationship network. Based on the CLMB model, we selected the 16 countries with the largest numbers of representative and active users in our sample dataset and analyzed the characteristics of user micro-blog behavior in these countries. The experimental results show that in countries with small populations and strong cohesiveness, users pay more attention to the social function of micro-blogging; on the contrary, in countries with large, loose social groups, users use micro-blogging as a news dissemination platform. The data further show that users in countries whose social network structure exhibits reciprocity rather than hierarchy use more linguistic elements to express happiness in their micro-blogs.

## FROST: Movement History-conscious Facility Relocation

The facility relocation (FR) problem, which aims to optimize the placement of facilities to accommodate changes in users' locations, has a broad spectrum of applications. Despite the significant progress made by existing solutions to the FR problem, they all assume each user is stationary and represented as a single point. Unfortunately, in reality, objects (e.g., people, animals) are mobile. Consequently, these efforts may fail to identify superior solutions to the FR problem. In this paper, for the first time, we take into account the movement history of users and introduce a novel FR problem, called MOTION-FR, to address the above limitation. Specifically, we present a framework called FROST to solve it. FROST comprises two exact algorithms, one index-based and one index-free. The former is designed for the scenario in which facilities and objects are known a priori, whereas the latter solves the MOTION-FR problem by jettisoning this assumption. Further, we extend the index-based algorithm to solve the general k-MOTION-FR problem, which aims to relocate k inferior facilities; we devise an approximate solution due to the NP-hardness of the problem. An experimental study over both real-world and synthetic datasets demonstrates the superiority of our framework over state-of-the-art FR techniques in efficiency and effectiveness.

## Strategic Attack & Defense in Security Diffusion Games

Security games model the confrontation between a defender protecting a set of targets and an attacker who tries to capture them. A variant of these games assumes security interdependence between targets, facilitating contagion of an attack. So far only stochastic spread of an attack has been considered. In this work, we introduce a version of security games where the attacker strategically drives the entire spread of attack and where interconnections between nodes affect their susceptibility to be captured. We find that the strategies effective in the settings without contagion or with stochastic contagion are no longer feasible when spread of attack is strategic. While in the former settings it was possible to efficiently find optimal strategies of the attacker, doing so in the latter setting turns out to be an NP-complete problem for an arbitrary network. However, for some simpler network structures, such as cliques, stars, and trees, we show that it is possible to efficiently find optimal strategies of both players. Next, for arbitrary networks, we study and compare the efficiency of various heuristic strategies. As opposed to previous works with no or stochastic contagion, we find that centrality-based defense is often effective when spread of attack is strategic.

## Flexible Multi-modal Hashing for Scalable Multimedia Retrieval

Multi-modal hashing methods can support efficient multimedia retrieval by combining multi-modal features for binary hash learning at both the offline training and online query stages. However, existing multi-modal methods cannot binarize queries when only one modality, or a subset of modalities, is provided. In this paper, we propose a novel *Flexible Multi-modal Hashing* (FMH) method to address this problem. FMH learns multiple modality-specific hash codes and multi-modal collaborative hash codes simultaneously within a single model. The hash codes are flexibly generated for newly arriving queries, which may provide any one modality or combination of modality features. Besides, the hash learning procedure is efficiently supervised by a pair-wise semantic matrix to enhance discriminative capability, successfully avoiding the challenging symmetric semantic matrix factorization and the O(n²) storage cost of the semantic matrix. Finally, we design a fast discrete optimization that learns hash codes directly with simple operations. Experiments validate the superiority of the proposed approach.

Exploring Correlation Network for Cheating Detection

The correlation network, typically formed by computing pairwise correlations between variables, has recently become a competitive paradigm to discover insights in various application domains, such as climate prediction, financial markets, and bioinformatics. In this study, we adopt this paradigm to detect cheating behavior hidden in business distribution channels, where falsified big deals are often made by collusive partners to obtain lower product prices --- a behavior deemed to be extremely harmful to the sales ecosystem. To this end, we assume that abnormal deals are likely to occur between two partners if their purchase-volume sequences have a strong negative correlation. This seemingly intuitive rule, however, imposes several research challenges. First, existing correlation measures are usually symmetric and thus cannot distinguish the different roles of partners in cheating. Second, the tick-to-tick correspondence between two sequences might be violated due to possible delays in purchase behavior, which should also be captured by correlation measures. Finally, the fact that any pair of sequences could be correlated may result in a number of false-positive cheating pairs, which need to be corrected in a systematic manner. To address these issues, we propose a correlation network analysis framework for cheating detection. In the framework, we adopt an asymmetric correlation measure to distinguish the two roles, namely, cheating seller and cheating buyer, in a cheating alliance. Dynamic time warping is employed to address the time offset between two sequences in computing the correlation. We further propose two graph-cut methods to convert the correlation network into a bipartite graph to rank cheating partners, which simultaneously helps to remove false-positive correlation pairs. Based on a 4-year real-world channel dataset from a worldwide IT company, we demonstrate the effectiveness of the proposed method in comparison to competitive baseline methods.
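The combination of negative correlation and dynamic time warping can be sketched in a few lines. The z-normalisation, function names, and toy volume sequences below are illustrative assumptions; the paper's actual measure is asymmetric and more elaborate.

```python
import math

def znorm(x):
    """Z-normalise a sequence so level and scale do not matter."""
    mu = sum(x) / len(x)
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    return [(v - mu) / sd for v in x]

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def anti_correlation(a, b):
    """Small values suggest a strong, possibly time-shifted,
    negative correlation between two volume sequences."""
    return dtw_distance(znorm(a), [-v for v in znorm(b)])

seller = [5, 1, 5, 1, 5]  # seller's purchase volumes per period
buyer = [1, 5, 1, 5, 1]   # buyer's volumes mirror the seller's
print(anti_correlation(seller, buyer))  # → 0.0
```

Warping on the normalised sequences absorbs the purchase delays mentioned above, since a lagged mirror-image pattern still finds a low-cost alignment.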

XLearn: Learning Activity Labels Across Heterogeneous Datasets

Sensor-driven systems often need to map sensed data into meaningfully-labelled activities in order to classify the phenomena being observed. A motivating and challenging example comes from human activity recognition in which smart home and other datasets are used to classify human activities to support applications such as ambient assisted living, health monitoring, and behavioural intervention. Building a robust and meaningful classifier needs annotated ground truth, labelled with what activities are actually being observed -- and acquiring high-quality, detailed, continuous annotations remains a challenging, time-consuming, and error-prone task, despite considerable attention in the literature. In this paper we use knowledge-driven ensemble learning to develop a technique that can combine classifiers built from individually-labelled datasets, even when the labels are sparse and heterogeneous. The technique both relieves individual users of the burden of annotation, and allows activities to be learned individually and then transferred to a general classifier. We evaluate our approach using four third-party, real-world smart home datasets and show that it enhances activity recognition accuracies even when given only a very small amount of training data.
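A toy version of label-heterogeneous ensembling might look like the following, where `ONTOLOGY` is a hypothetical mapping from each dataset's local activity labels to a shared vocabulary and base-classifier predictions are combined by majority vote; this is a drastic simplification of the knowledge-driven ensemble described above.

```python
from collections import Counter

# Hypothetical ontology mapping each dataset's local activity
# labels onto a shared vocabulary.
ONTOLOGY = {
    "prep_meal": "cooking", "make_dinner": "cooking",
    "wash_up": "cleaning", "tidy": "cleaning",
}

def ensemble_predict(local_predictions):
    """Translate each base classifier's local label to the shared
    vocabulary, then take a majority vote."""
    shared = [ONTOLOGY.get(p, p) for p in local_predictions]
    return Counter(shared).most_common(1)[0][0]

# Two of three dataset-specific classifiers agree on 'cooking'.
print(ensemble_predict(["prep_meal", "make_dinner", "tidy"]))  # → cooking
```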

Discovering Underlying Plans Based on Shallow Models

Plan recognition aims to discover target plans (i.e., sequences of actions) behind observed actions, given history plan libraries or action models. Previous approaches either discover plans by maximally matching observed actions to plan libraries, assuming target plans come from those libraries, or infer plans by executing action models that best explain the observed actions, assuming complete action models are available. In real-world applications, however, target plans are often not drawn from plan libraries, and complete action models are often unavailable, since building complete sets of plans and complete action models is difficult or expensive. In this paper, we view plan libraries as corpora and learn vector representations of actions from them; we then discover target plans based on these vector representations. Specifically, we propose two approaches, DUP and RNNPlanner, to discover target plans based on vector representations of actions. DUP uses an EM-style framework to capture local contexts of actions and discovers target plans by optimizing their probability, while RNNPlanner leverages long- and short-term contexts of actions with recurrent neural networks (RNNs) to help recognize target plans. In our experiments, we empirically show that both approaches are capable of discovering underlying plans that are not from plan libraries, without requiring action models to be provided. We demonstrate the effectiveness of our approaches by comparing their performance to traditional plan recognition approaches in three planning domains, and we compare DUP and RNNPlanner to assess their respective advantages and disadvantages.
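As a rough stand-in for the learned action embeddings, the sketch below builds simple co-occurrence vectors from a tiny plan corpus and predicts a likely next action. The blocksworld-style action names, the window size, and the argmax prediction rule are our own illustrative choices, not DUP or RNNPlanner themselves.

```python
import numpy as np

def action_vectors(corpus, window=1):
    """Represent each action by its co-occurrence counts with
    other actions inside a context window."""
    vocab = sorted({a for plan in corpus for a in plan})
    idx = {a: i for i, a in enumerate(vocab)}
    vecs = np.zeros((len(vocab), len(vocab)))
    for plan in corpus:
        for i, a in enumerate(plan):
            for j in range(max(0, i - window), min(len(plan), i + window + 1)):
                if j != i:
                    vecs[idx[a], idx[plan[j]]] += 1
    return vocab, vecs

def predict_next(last_action, vocab, vecs):
    """Guess the next action as the strongest co-occurrence partner."""
    return vocab[int(np.argmax(vecs[vocab.index(last_action)]))]

corpus = [["unstack", "putdown", "pickup", "stack"],
          ["pickup", "stack", "unstack", "putdown"]]
vocab, vecs = action_vectors(corpus)
print(predict_next("pickup", vocab, vecs))  # → stack
```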

Robust Misinformation Detection Over Time and Attack

In this study, we examine the impact of time on state-of-the-art news veracity classifiers. We show that as time progresses, classification performance for both unreliable news and hyper-partisan news slowly degrades. While this degradation does happen, it happens much more slowly than initially expected, illustrating that content-based features, such as writing style, are robust to changes in the news cycle. We show that this small degradation can be mitigated using online learning. Lastly, we examine the impact of adversarial content manipulation by malicious news producers over time. Specifically, we test three attacks based on changes in the input space and data availability.
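Online learning as a mitigation for temporal drift can be illustrated with a tiny hand-rolled logistic-regression classifier whose weights are updated on each new batch of labelled articles; the simulated drifting decision boundary below is our own assumption, not the paper's experimental setup.

```python
import numpy as np

class OnlineLogReg:
    """Tiny online logistic-regression classifier: each new batch of
    labelled examples nudges the weights, so the model can track a
    drifting news cycle instead of staying frozen at training time."""

    def __init__(self, dim, lr=0.5):
        self.w = np.zeros(dim)
        self.lr = lr

    def partial_fit(self, X, y):
        # One averaged gradient step on the logistic loss.
        p = 1.0 / (1.0 + np.exp(-(X @ self.w)))
        self.w += self.lr * X.T @ (y - p) / len(y)

    def predict(self, X):
        return (X @ self.w >= 0).astype(int)

rng = np.random.default_rng(1)
clf = OnlineLogReg(dim=2)
# Simulated monthly batches whose decision boundary slowly drifts.
for month in range(12):
    X = rng.normal(size=(50, 2))
    y = (X[:, 0] + 0.1 * month * X[:, 1] > 0).astype(int)
    for _ in range(20):
        clf.partial_fit(X, y)
print("accuracy on latest batch:", (clf.predict(X) == y).mean())
```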
