ACM Transactions on Intelligent Systems and Technology (TIST)

Latest Articles

Location-Specific Influence Quantification in Location-Based Social Networks

Location-based social networks (LBSNs) such as Foursquare offer a platform for users to share and be aware of each other’s physical movements. …

NEWS

Recent TIST News: 

ACM Transactions on Intelligent Systems and Technology (TIST) is ranked among the best of all ACM journals in terms of citations received per paper. Each paper published in TIST from 2010 to 2018 has received 12.8 citations on average in the ACM Digital Library.

ACM Transactions on Intelligent Systems and Technology (TIST) has been a success story. Submissions to the journal increased 76 percent from 2013 to 2015, from 278 original papers and revisions to 488. Despite this increase, the journal's acceptance rate has remained steady at approximately 24 percent. Furthermore, the TIST Impact Factor increased from 1.251 in 2014 to 3.19 in 2016.


Journal Metrics (2018)

  • Impact Factor: 3.19
  • 5-year Impact Factor: 10.47
  • Avg. Citations in ACM DL: 12.8

About TIST

ACM Transactions on Intelligent Systems and Technology (ACM TIST) is a scholarly journal that publishes the highest quality papers on intelligent systems, applicable algorithms and technology with a multi-disciplinary perspective. An intelligent system is one that uses artificial intelligence (AI) techniques to offer important services (e.g., as a component of a larger system) to allow integrated systems to perceive, reason, learn, and act intelligently in the real world.

Forthcoming Articles

Measuring Conditional Independence by Independent Residuals for Causal Discovery

Local Learning Approaches for Finding Effects of a Specified Cause and Their Causal Paths

Causal networks are used to describe and discover causal relationships among variables and the mechanisms that generate data. Many approaches exist for learning a global causal network over all observed variables. In many applications, however, we are interested only in the effects of a specified cause variable and in the causal paths from that cause variable to its effects. Instead of learning a global causal network, we propose several local learning approaches for finding all effects (or descendants) of the specified cause variable and the causal paths from the cause variable to an effect variable of interest. We discuss the identifiability of the effects and the causal paths from observed data and prior knowledge. When the causal paths are not identifiable, our approaches find a path set that contains the causal paths of interest.
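
As a rough illustration of the graph-side operations involved (not the authors' learning procedure), the sketch below assumes a hypothetical, already-learned causal DAG and enumerates the effects of a cause variable and a path set to one effect of interest:

```python
# Sketch: given a hypothetical causal DAG, list the effects (descendants)
# of cause X and the candidate causal paths from X to an effect B.
# The paper learns this locally from data; here the graph is assumed known.
import networkx as nx

dag = nx.DiGraph([("X", "A"), ("A", "B"), ("X", "C"), ("C", "B"), ("D", "X")])

effects = nx.descendants(dag, "X")              # all effects of cause X
print("effects of X:", effects)                 # {'A', 'B', 'C'}

# A path set containing the causal paths from X to effect B
paths = list(nx.all_simple_paths(dag, "X", "B"))
print("paths X -> B:", paths)                   # [['X','A','B'], ['X','C','B']]
```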

Crowdsourcing Mechanism for Trust Evaluation in CPCS based on Intelligent Mobile Edge Computing

The combination of Cyber Physical Systems and Cloud Computing has received tremendous research interest and effort from both academia and industry; it enables a new breed of applications and services and can fundamentally change the way people interact with the physical world. However, owing to the relatively long distance between the remote cloud and the sensors, cloud computing cannot provide real-time service and fine-grained management for end devices. Meanwhile, untrustworthy nodes may endanger the whole system. In this paper, we apply Intelligent Mobile Edge Computing to solve these problems. We first introduce the Mobile Crowdsourcing-Based Trust Evaluation Mechanism, where mobile edge users apply Artificial Intelligence to evaluate the trustfulness of sensor nodes. We then design two incentive mechanisms, the Trustworthy Incentive Mechanism and the Quality-Aware Trustworthy Incentive Mechanism. The first aims to impel edge users to report their true capabilities and costs; the second motivates edge users to conduct tasks and report results honestly. Detailed theoretical analysis certifies the effectiveness of the proposed mechanisms and demonstrates the validity of the Quality-Aware Trustworthy Incentive Mechanism with respect to data, effort, and quality trustfulness. Extensive experiments validate the proposed mechanisms; the results corroborate that they can efficiently stimulate mobile edge users to perform evaluation tasks honestly.
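
A toy payment rule in the spirit of quality-aware incentives (not the paper's mechanism; all names and numbers are made up) might weight each edge user's reward by agreement with the majority verdict, discouraging careless or dishonest reports:

```python
# Toy quality-weighted payment: edge users report binary trust verdicts on
# sensor nodes; reward scales with agreement with the majority verdict.
from collections import Counter

reports = {  # hypothetical verdicts: user -> {sensor: trustworthy?}
    "u1": {"s1": True,  "s2": False},
    "u2": {"s1": True,  "s2": False},
    "u3": {"s1": False, "s2": False},
}
base_pay = 10.0

majority = {s: Counter(r[s] for r in reports.values()).most_common(1)[0][0]
            for s in ("s1", "s2")}

for user, verdicts in reports.items():
    agree = sum(verdicts[s] == majority[s] for s in verdicts) / len(verdicts)
    print(user, "payment:", base_pay * agree)   # quality-weighted payment
```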

Short text analysis based on dual semantic extension and deep hashing in microblog

Short text analysis is challenging owing to the sparsity and limited semantics of short texts. Semantic extension approaches learn the meaning of a short text by introducing external knowledge. However, given the randomness of short text descriptions in microblogs, traditional extension methods cannot accurately mine semantics suited to the microblog theme. We therefore use the prominent, refined hashtag information in microblogs, along with complex social relationships, to provide implicit guidance for the semantic extension of short texts. Specifically, we design a deep hash model based on social and conceptual semantic extension, which consists of dual semantic extension and deep hashing representation. In the extension stage, the short text is first conceptualized to construct a hashtag graph in the concept space. Then, associated hashtags are generated by correlation calculations that integrate social relationships and concepts to extend the short text. In the deep hash model, we use semantic hashing to encode the abundant semantic features into a compact and meaningful binary code. Finally, extensive experiments demonstrate that our method learns and represents short texts well by using more meaningful semantic signals, effectively enhancing and guiding the semantic analysis and understanding of short texts in microblogs.
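
A minimal sketch of the final hashing step (binarizing dense embeddings and comparing by Hamming distance); the embeddings below are random stand-ins for the outputs of a semantic hashing model:

```python
# Sketch: turn dense semantic embeddings into compact binary codes and
# retrieve neighbors by Hamming distance.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 64))       # 5 short texts, 64-d features

codes = (embeddings > 0).astype(np.uint8)   # sign binarization -> 64-bit codes

def hamming(a, b):
    return int(np.count_nonzero(a != b))

query = codes[0]
dists = [hamming(query, c) for c in codes]
print("Hamming distances to text 0:", dists)
```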

Deep Reinforcement Learning for Vehicular Edge Computing: An Intelligent Offloading System

The development of smart vehicles brings drivers and passengers a comfortable and safe environment, and various emerging applications promise to enrich users' traveling experiences and daily life. However, executing computation-intensive applications on resource-constrained vehicles still faces huge challenges. In this paper, we construct an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning. First, both the communication and computation states are modelled by finite Markov chains. Moreover, the task scheduling and resource allocation strategy is formulated as a joint optimization problem to maximize the revenue of network operators. Due to its complexity, the original problem is divided into two sub-problems. A two-sided matching scheme and a deep reinforcement learning approach are developed to schedule offloading requests and allocate network resources, respectively. Performance evaluations illustrate the effectiveness and superiority of the constructed system.
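
As a much-simplified stand-in for the deep RL component, the sketch below runs tabular Q-learning on a toy offloading decision with made-up reward and transition dynamics:

```python
# Toy Q-learning for offloading: states are channel-quality levels,
# actions are {0: compute locally, 1: offload}. Dynamics are invented.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    # hypothetical: offloading pays off only in good channel states (2-3),
    # local execution in poor ones (0-1)
    reward = 1.0 if (a == 1 and s >= 2) or (a == 0 and s < 2) else -0.5
    return rng.integers(n_states), reward   # channel evolves randomly

s = 0
for _ in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
print(Q.round(2))   # offload (column 1) should dominate in states 2-3
```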

Multi-modal Curriculum Learning over Graphs

Curriculum Learning (CL) is a recently proposed learning paradigm that aims to achieve satisfactory performance by properly organizing the learning sequence from simple curriculum examples to more difficult ones. Up to now, little work has explored CL for data with graph structure. This paper therefore proposes a novel CL algorithm to guide Label Propagation (LP) over graphs, whose target is to "learn" the labels of unlabeled examples on the graphs. Specifically, we assume that different unlabeled examples have different levels of difficulty for propagation, and that their label learning should follow a simple-to-difficult sequence with updated curriculums. Furthermore, since practical data are often characterized by multiple modalities, every modality in our method is associated with a "teacher" that not only evaluates the difficulty of examples from its own viewpoint but also cooperates with the other teachers to generate the overall simplest curriculum examples for propagation. By taking the curriculums suggested by the teachers as a whole, the common preference (i.e., commonality) of the teachers in selecting the simplest examples is discovered by a row-sparse matrix, and their distinct opinions (i.e., individuality) are captured by a sparse noise matrix. As a result, an accurate curriculum sequence can be established and the propagation quality thus improved. Theoretically, we prove that the propagation risk bound is closely related to the examples' difficulty information; empirically, we show that our method achieves higher accuracy than state-of-the-art CL and LP algorithms on various multi-modal tasks.
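
For reference, this is the plain Label Propagation iteration that the proposed curriculum guides, shown on a toy four-node graph (the graph and labels are illustrative only):

```python
# Plain LP: iterate F <- alpha * S @ F + (1 - alpha) * Y, where S is the
# symmetrically normalized affinity matrix and Y holds the known labels.
import numpy as np

W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], float)        # toy graph
D = np.diag(W.sum(1) ** -0.5)
S = D @ W @ D                              # symmetric normalization

Y = np.array([[1, 0],                      # node 0 labeled class 0
              [0, 0],                      # nodes 1-2 unlabeled
              [0, 0],
              [0, 1]], float)              # node 3 labeled class 1
F = Y.copy()
for _ in range(50):
    F = 0.9 * S @ F + 0.1 * Y
print(F.argmax(1))                         # propagated labels per node
```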

A Trust Computing based Security Routing Scheme for Cyber Physical Systems

Security is a pivotal issue for the development of Cyber Physical Systems (CPS). The trusted computing base of a CPS includes the complete set of protection mechanisms, such as hardware, firmware, and software, whose combination is responsible for enforcing a system security policy. We propose a Trust Detection Based Secured Routing (TDSR) scheme that establishes secure routes from source nodes to the data center in a malicious environment, achieving a satisfactory security level for CPS. In the TDSR scheme, sensor nodes along candidate routes send detection routing messages to assess the trust of relay nodes. Once node trust is obtained, data packets are routed to the sink securely through trustworthy nodes. Because detection routing is executed by nodes with abundant energy, the network lifetime is not affected. Performance is evaluated through simulation in terms of successful routing ratio, compromised node detection ratio, and detection routing overhead. We find that TDSR improves performance compared with previous schemes.
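
A minimal sketch of the routing idea once per-node trust values are known: restrict routing to nodes above a trust threshold. The graph, trust values, and threshold below are hypothetical:

```python
# Route only through nodes whose (already-evaluated) trust exceeds 0.5.
import networkx as nx

g = nx.Graph([("src", "a"), ("a", "b"), ("b", "sink"),
              ("src", "c"), ("c", "sink")])
trust = {"src": 1.0, "a": 0.9, "b": 0.3, "c": 0.8, "sink": 1.0}

trusted = g.subgraph(n for n in g if trust[n] >= 0.5)
print(nx.shortest_path(trusted, "src", "sink"))   # avoids low-trust node b
```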

Using Social Dependence to Enable Neighbourly Behaviour in Open Multi-agent Systems

Agents frequently coordinate their behaviour and collaborate with their neighbours, which is especially needed when resources are constrained, to achieve a shared goal or to accomplish a complex task that they cannot do alone. In agent neighbourhoods with a single shared resource, agents' cooperation and neighbourly behaviour are the key to any successful collaborative process. However, such behaviour is particularly challenging in open multi-agent, multi-neighbourhood systems, where agents are self-interested and continuously and unpredictably leave and join neighbourhoods. In current approaches, social reasoning is used to capture agents' capabilities in disjoint neighbourhoods to support the selection of a qualified set of participants to accomplish a complex task. However, these approaches are not useful in systems where agents do not depend on each other to accomplish complex tasks but may depend on each other when using shared resources and sharing the overall costs and benefits. In this paper, social dependencies enable agents to be cooperative and to demonstrate good neighbourly behaviour in open multi-neighbourhood systems. Agents use both self-adaptation and social reasoning techniques to adjust their level of involvement in cooperative processes and to balance self-interest against cooperation. Each agent builds and maintains a social dependency model, which enhances its understanding of its own goal dependencies on its neighbours. The dependency model enables agents to effectively adjust their behaviour or move between neighbourhoods to help lower shared costs or increase shared benefits. The proposed model is evaluated in a multi-neighbourhood setting with 100 agents sharing the constrained resources available in each neighbourhood, under varying levels of agent mobility and neighbourhood density. The results are compared with collaborative and competitive scenarios to evaluate agents' success at achieving multiple dependent goals while sharing constrained resources. In the most dense and mobile scenario, the proposed model shows a 97.6% success rate at achieving the shared goal, with up to 50% lower communication and computation cost, while 100% of individual goals are met.

Large-scale Frequent Episode Mining from Complex Event Sequences with Hierarchies

Frequent Episode Mining (FEM), which aims to mine frequent sub-sequences from a single long event sequence, is one of the essential building blocks of the sequence mining research field. Existing FEM studies suffer from unsatisfactory scalability when faced with complex sequences, as testing whether an episode occurs in a sequence is an NP-complete problem. In this paper, we propose a scalable, distributed framework to support FEM on "big" event sequences, where "big" means the sequence is either very long or contains masses of simultaneous events. The events are arranged in a predefined hierarchy, which derives abstract events that can form episodes not directly appearing in the input sequence. Specifically, we devise an event-centered, hierarchy-aware partitioning strategy to allocate events from different levels of the hierarchy to local processes, and we present an efficient special-purpose algorithm to improve local mining performance. We also extend our framework to support maximal and closed episode mining in the context of event hierarchies; to the best of our knowledge, this is the first attempt to define and discover hierarchy-aware maximal and closed episodes. We implement the proposed framework on Apache Spark and conduct experiments on both synthetic and real-world datasets. Experimental results demonstrate the efficiency and scalability of the proposed approach and show that practical patterns emerge when event hierarchies are taken into account.
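
One core primitive, sketched under simplifying assumptions (a serial episode, no window constraint, no hierarchy): counting non-overlapping leftmost occurrences of an episode in a single event sequence with a greedy scan:

```python
# Count non-overlapping minimal occurrences of serial episode <A, B, C>
# in one event sequence. The distributed, hierarchy-aware framework in the
# paper builds on primitives of this kind.
def count_serial_episode(sequence, episode):
    count, i = 0, 0
    for event in sequence:
        if event == episode[i]:
            i += 1
            if i == len(episode):   # completed one occurrence
                count, i = count + 1, 0
    return count

seq = list("AXBYCABACB")            # toy event sequence
print(count_serial_episode(seq, list("ABC")))   # -> 2
```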

Efficient User Guidance for Validating Participatory Sensing Data

Participatory sensing has become a new data collection paradigm that leverages the wisdom of the crowd for big data applications without the cost of dedicated sensors. It collects data from human sensors via their own devices, such as cell phone accelerometers, cameras, and GPS receivers. This benefit comes with a drawback: human sensors are arbitrary and inherently uncertain because there is no quality guarantee. Moreover, participatory sensing data are time series that exhibit not only highly irregular dependencies on time but also high variance between sensors. To overcome these limitations, we formulate the problem of validating uncertain time series collected by participatory sensors. We approach the problem with an iterative validation process on top of a probabilistic time series model. First, we generate a series of probability distributions from raw data by tailoring a state-of-the-art dynamical model, GARCH, to our joint time series setting. Second, we design a feedback process that combines an adaptive aggregation model, which unifies the joint probabilistic time series, with an efficient user guidance model, which validates aggregated data with minimal effort. Through extensive experimentation on both real and synthetic data, we demonstrate the efficiency and effectiveness of our approach. Highlights include the fast running time of the probabilistic model, the robustness of the aggregation model to outliers, and the significant effort savings of the guidance model.
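
For readers unfamiliar with GARCH, the sketch below shows the standard GARCH(1,1) recursion underlying such a probabilistic model, sigma2[t] = omega + alpha * eps[t-1]^2 + beta * sigma2[t-1], with illustrative (not fitted) parameters:

```python
# GARCH(1,1) variance recursion: each time step yields a distribution
# (mean 0, variance sigma2[t]) rather than a point value.
import numpy as np

rng = np.random.default_rng(2)
omega, alpha, beta = 0.1, 0.2, 0.7         # illustrative parameters
T = 200
eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = omega / (1 - alpha - beta)     # unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
print(sigma2[-5:].round(3))
```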

Online Heterogeneous Transfer Learning by Knowledge Transition

In this paper, we study the problem of online heterogeneous transfer learning, where the objective is to make predictions for a target data sequence arriving in an online fashion, with some offline labeled instances from a heterogeneous source domain provided as auxiliary data. The feature spaces of the source and target domains are completely different, so the source data cannot be used directly to assist the learning task in the target domain. To address this issue, we take advantage of unlabeled co-occurrence instances as intermediate supplementary data to connect the source and target domains and perform knowledge transition from the source domain to the target domain. We propose a novel online heterogeneous transfer learning algorithm, Online Heterogeneous Knowledge Transition (OHKT), for this purpose. In OHKT, we first generate pseudo labels for the co-occurrence data based on the labeled source data, and then develop an online learning algorithm to classify the target sequence by leveraging the co-occurrence data with pseudo labels. Experimental results on real-world data sets demonstrate the effectiveness and efficiency of the proposed algorithm.
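
A simplified sketch of this two-step idea (not the paper's exact algorithm; all data below are synthetic stand-ins): fit a classifier on the labeled source features, pseudo-label the source view of the co-occurrence pairs, then train an online learner on the target view using those pseudo labels:

```python
# (1) fit on labeled source data, (2) pseudo-label the source view of
# co-occurrence pairs, (3) run an online perceptron on the target view.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
Xs = rng.normal(size=(100, 5))                  # source features (5-d)
ys = (Xs[:, 0] > 0).astype(int)                 # source labels
co_s = rng.normal(size=(50, 5))                 # co-occurrence, source view
co_t = np.hstack([co_s[:, :1],                  # target view (3-d), sharing
                  rng.normal(size=(50, 2))])    # some signal with source view

pseudo = LogisticRegression().fit(Xs, ys).predict(co_s)   # steps 1-2

w = np.zeros(3)                                 # step 3: online perceptron
for x, y in zip(co_t, 2 * pseudo - 1):          # labels in {-1, +1}
    if y * (w @ x) <= 0:
        w += y * x
print("target-domain weights:", w.round(2))
```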

Motion-aware Compression and Transmission of Mesh Animation Sequences

With the increasing demand for using 3D mesh data over networks, supporting effective compression and efficient transmission of meshes has attracted much attention in recent years. This paper introduces a novel compression method for 3D mesh animation sequences that supports user-defined and progressive transmission over networks. Our motion-aware approach starts by clustering animation frames based on their motion similarities, dividing a mesh animation sequence into fragments of varying lengths. This is done by a novel temporal clustering algorithm that measures motion similarity based on the curvature and torsion of the space curve formed by corresponding vertices along a series of animation frames. We further segment each cluster based on mesh vertex coherence, representing topological proximity within an object under a given motion. To produce a compact representation, we perform intra-cluster compression based on the Graph Fourier Transform (GFT) and Set Partitioning In Hierarchical Trees (SPIHT) coding. Applying the GFT yields optimized compression because of the proximity in vertex position and motion. We adapt SPIHT to support progressive transmission and design a mechanism to transmit mesh animation sequences at user-defined quality. Experimental results show that our method achieves a high compression ratio while maintaining a low reconstruction error.
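
The motion-similarity ingredients can be sketched directly: curvature kappa = |r' x r''| / |r'|^3 and torsion tau = ((r' x r'') . r''') / |r' x r''|^2 of the space curve a vertex traces across frames, estimated here with finite differences on a toy helix trajectory (the paper's exact discretization may differ):

```python
# Discrete curvature and torsion of a vertex trajectory via np.gradient.
import numpy as np

t = np.linspace(0, 4 * np.pi, 200)
r = np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=1)   # toy helix trajectory

d1 = np.gradient(r, t, axis=0)
d2 = np.gradient(d1, t, axis=0)
d3 = np.gradient(d2, t, axis=0)
cross = np.cross(d1, d2)
kappa = np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3
tau = np.einsum("ij,ij->i", cross, d3) / np.linalg.norm(cross, axis=1) ** 2
print(kappa[100].round(3), tau[100].round(3))  # helix: ~0.8 and ~0.4, constant
```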

Predicting Academic Performance for College Students: A Campus Behavior Perspective

Detecting abnormal behaviors of students in time and providing personalized intervention and guidance at an early stage are important in educational management. Academic performance prediction is an important building block for enabling this pre-intervention and guidance. Most previous studies are based on questionnaire surveys and self-reports, which suffer from small sample sizes and social desirability bias. In this paper, we collect longitudinal behavioral data from 6,597 students' smart cards and propose three major types of discriminative behavioral factors: diligence, orderliness, and sleep patterns. Empirical analysis demonstrates that these behavioral factors are strongly correlated with academic performance. Furthermore, motivated by social influence theory, we analyze the correlation of each student's academic performance with that of his/her behaviorally similar students; statistical tests indicate this correlation is significant. Based on these factors, we build a multi-task predictive framework based on a learning-to-rank algorithm for academic performance prediction. This framework captures inter-semester correlation and inter-major correlation, and integrates student similarity to predict students' academic performance. Experiments on a large-scale real-world dataset show the effectiveness of our methods for predicting academic performance and the effectiveness of the proposed behavioral factors.
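
One plausible instantiation of an "orderliness"-style feature (not necessarily the paper's definition): the entropy of a student's hourly activity histogram, where low entropy indicates regular habits:

```python
# Hypothetical orderliness feature: entropy of hour-of-day event histogram.
import numpy as np

def orderliness(event_hours, bins=24):
    hist = np.bincount(event_hours, minlength=bins).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()          # lower = more orderly

regular = np.array([8, 8, 8, 12, 12, 18, 18, 18])   # meals at fixed hours
erratic = np.array([1, 5, 9, 11, 14, 17, 21, 23])
print(orderliness(regular), orderliness(erratic))   # ~1.56 vs 3.0
```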

Recognizing Multi-Agent Plans When Action Models and Team Plans Are Both Incomplete

Multi-Agent Plan Recognition (MAPR) aims to recognize team structures (composed of team plans) from observed team traces (action sequences) of a set of intelligent agents. In this paper, we formulate Multi-Agent Plan Recognition based on partially observed team traces and present a weighted MAX-SAT based framework that recognizes multi-agent plans from such traces with the help of two types of auxiliary knowledge: a library of incomplete team plans and a set of incomplete action models. Our framework functions in two phases. We first build a set of hard constraints that encode the correctness property of the team plans and a set of soft constraints that encode the optimal-utility property of team plans, based on the input team trace, the incomplete team plans, and the incomplete action models. We then solve all the constraints using a weighted MAX-SAT solver and convert the solution to a set of team plans that best explain the structure of the observed team trace. We empirically demonstrate both the effectiveness and the efficiency of our framework in benchmark domains from the International Planning Competition (IPC).
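
The weighted MAX-SAT core can be illustrated on a tiny, made-up clause set (real instances would go to a dedicated solver): hard clauses must hold, soft clauses contribute their weight when satisfied, and the best assignment maximizes the total satisfied weight:

```python
# Brute-force weighted MAX-SAT on two variables; positive int = variable,
# negative int = its negation.
from itertools import product

INF = float("inf")
clauses = [(INF, [1, 2]),      # hard: plan step 1 or 2 occurs
           (INF, [-1, -2]),    # hard: but not both
           (3.0, [1]),         # soft: observation favors step 1
           (1.0, [2])]         # soft: weaker evidence for step 2

def score(assign):             # assign: dict var -> bool
    total = 0.0
    for w, lits in clauses:
        sat = any(assign[abs(l)] == (l > 0) for l in lits)
        if not sat and w == INF:
            return -INF        # hard clause violated -> infeasible
        if sat and w != INF:
            total += w         # soft clauses contribute their weight
    return total

best = max((dict(zip((1, 2), vals))
            for vals in product([False, True], repeat=2)), key=score)
print(best)                    # {1: True, 2: False}
```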

Accounting for hidden common causes when inferring cause and effect from observational data

Hidden common causes make it difficult to infer causal relationships from observational data. Here, we consider a new method to account for a hidden common cause that infers its presence from the data. As with other approaches that can account for common causes, this approach is successful only in some cases. We describe such a case taken from the field of genomics, wherein one tries to identify which genomic markers causally influence a trait of interest.

Exploiting the Value of the Center-dark Channel Prior for Salient Object Detection

Saliency detection aims to detect the most attractive objects in images and is widely used as a foundation for various applications. In this paper, we propose a novel salient object detection algorithm for RGB-D images using center-dark channel priors. First, we generate an initial saliency map based on a color saliency map and a depth saliency map of a given RGB-D image. Then, we generate a center-dark channel map based on center saliency and dark channel priors. Finally, we fuse the initial saliency map with the center-dark channel map to generate the final saliency map. Extensive evaluations over four benchmark datasets demonstrate that our proposed method performs favorably against most state-of-the-art approaches. In addition, we discuss the application of the proposed algorithm to small target detection and demonstrate the universal value of center-dark channel priors in the field of object detection.
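The classic dark-channel computation that such priors build on is simple to sketch: take the per-pixel minimum over color channels, then min-filter over a local patch (the image below is random stand-in data):

```python
# Dark channel of an RGB image: channel-wise minimum, then a local
# minimum filter over patches.
import numpy as np
from scipy.ndimage import minimum_filter

rng = np.random.default_rng(4)
img = rng.random((64, 64, 3))                   # toy RGB image in [0, 1]

dark = minimum_filter(img.min(axis=2), size=7)  # dark channel, 7x7 patches
print(dark.shape, round(float(dark.mean()), 3))
```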

Detecting causal relationships in simulation models using intervention-based counterfactual analysis

Central to explanatory simulation models is their capability to show not just that, but also why, particular things happen. Explanation is closely related to the detection of causal relationships and, in a simulation context, is typically done by means of controlled experiments. However, for complex simulation models, conventional 'black-box' experiments may be too coarse-grained to cope with spurious relationships. We present an intervention-based causal analysis methodology that exploits the manipulability of computational models to detect and circumvent spurious effects. The core of the methodology is a formal model that maps basic causal assumptions to causal observations and allows identification of combinations of assumptions that negatively affect observability. First experiments indicate that the methodology can successfully deal with notoriously tricky situations involving asymmetric and symmetric overdetermination and can detect fine-grained causal relationships between events in the simulation. As illustrated in the paper, the methodology can be easily integrated into an existing simulation environment.
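
The basic contrast the methodology exploits can be sketched on a tiny structural model with invented equations: merely observing Z conflates its effect with a confounder, whereas intervening (setting Z, severing its dependence on upstream causes) recovers the causal effect:

```python
# Observing vs. intervening in a toy structural model: x -> z -> y, x -> y.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

def simulate(do_z=None):
    x = rng.normal(size=n)                     # upstream cause
    z = 2 * x + rng.normal(size=n) if do_z is None else np.full(n, do_z)
    y = 3 * z + x + rng.normal(size=n)
    return x, z, y

_, z, y = simulate()
obs_slope = np.cov(z, y)[0, 1] / np.var(z)     # confounded association
_, _, y1 = simulate(do_z=1.0)
_, _, y0 = simulate(do_z=0.0)
print(round(obs_slope, 2),                     # ~3.4 (biased by x)
      round(float(y1.mean() - y0.mean()), 2))  # ~3.0 (true causal effect)
```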

