ACM Transactions on Intelligent Systems and Technology (TIST)

Latest Articles

Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments

Eliminating the negative effect of non-stationary environmental noise is a long-standing research topic for automatic speech recognition but still... (more)

Mining Significant Microblogs for Misinformation Identification: An Attention-Based Approach

With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, and brings negative... (more)

Dynamic Optimization of the Level of Operational Effectiveness of a CSOC Under Adverse Conditions

The analysts at a cybersecurity operations center (CSOC) analyze the alerts that are generated by... (more)

Exploiting Multilabel Information for Noise-Resilient Feature Selection

In a conventional supervised learning paradigm, each data instance is associated with one single class label. Multilabel learning differs in the way... (more)

Multiview Discrete Hashing for Scalable Multimedia Search

Hashing techniques have recently gained increasing research interest in multimedia studies. Most existing hashing methods only employ single features... (more)

Automatic Extraction of Behavioral Patterns for Elderly Mobility and Daily Routine Analysis

The elderly living in smart homes can have their daily movement recorded and analyzed. As different elders can have their own living habits, a... (more)

On Incremental High Utility Sequential Pattern Mining

High utility sequential pattern (HUSP) mining is an emerging topic in pattern mining, and only a few algorithms have been proposed to address it. In... (more)

Optimum Velocity Profile of Multiple Bernstein-Bézier Curves Subject to Constraints for Mobile Robots

This article deals with trajectory planning that is suitable for nonholonomic differentially driven... (more)

Integrate and Conquer: Double-Sided Two-Dimensional k-Means Via Integrating of Projection and Manifold Construction

In this article, we introduce a novel, general methodology, called integrate and conquer, for simultaneously accomplishing the tasks of feature extraction, manifold construction, and clustering, which is taken to be superior to building a clustering method as a single task. When the proposed novel methodology is used on two-dimensional (2D) data,... (more)

Combination Forecasting Reversion Strategy for Online Portfolio Selection

Machine learning and artificial intelligence techniques have been applied to construct online portfolio selection strategies recently. A popular and... (more)

NEWS

Recent TIST News: 

ACM Transactions on Intelligent Systems and Technology (TIST) is ranked No. 1 among all ACM journals in terms of citations received per paper. Each paper published in TIST between Jan. 2010 and Dec. 2014 received, on average, 18 citations in the ACM Digital Library during the past fiscal year (July 1, 2015 to June 30, 2016).

ACM Transactions on Intelligent Systems and Technology (TIST) has been a success story. Submissions to the journal increased 76 percent from 2013 to 2015, from 278 original papers and revisions to 488. Despite this increase, the journal's acceptance rate has remained steady at approximately 24 percent. Furthermore, the TIST Impact Factor increased from 1.251 in 2014 to 3.19 in 2016.


Journal Metrics (2016)

  • Impact Factor: 3.19
  • 5-year Impact Factor: 10.47
  • Avg. Citations in ACM DL: 18

About TIST

ACM Transactions on Intelligent Systems and Technology (ACM TIST) is a scholarly journal that publishes the highest quality papers on intelligent systems, applicable algorithms and technology with a multi-disciplinary perspective. An intelligent system is one that uses artificial intelligence (AI) techniques to offer important services (e.g., as a component of a larger system) to allow integrated systems to perceive, reason, learn, and act intelligently in the real world.

Forthcoming Articles

ACM TIST Special Issue on Visual Analytics

Learning Urban Community Structures: A Collective Embedding Perspective with Periodic Spatial-temporal Mobility Graphs

Learning urban community structures refers to the efforts of quantifying, summarizing, and representing an urban community's (i) static structures, e.g., Point-Of-Interest (POI) buildings and their corresponding geographic allocations, and (ii) dynamic structures, e.g., human mobility patterns among POIs. By learning the community structures, we can better quantitatively represent urban communities and understand their evolution in the development of cities. This can help us boost commercial activities, enhance public security, foster social interactions, and, ultimately, yield livable, sustainable, and viable environments. However, due to the complex nature of urban systems, it is traditionally challenging to learn the structures of urban communities. To address this problem, in this paper, we propose a collective embedding framework to learn the community structure from multiple periodic spatial-temporal graphs of human mobility. Specifically, we first exploit a probabilistic propagation-based approach to create a set of mobility graphs from periodic human mobility records. In these mobility graphs, the static POIs are regarded as vertices, the dynamic mobility connectivity between POI pairs is regarded as edges, and the edge weights periodically evolve over time. A collective deep auto-encoder method is then developed to collaboratively learn the embeddings of POIs from multiple spatial-temporal mobility graphs. In addition, we develop an Unsupervised Graph-based Weighted Aggregation (UGWA) method to align and aggregate the POI embeddings into the representation of the community structure. As an application, we apply the proposed embedding framework to rank high-rated residential communities to evaluate the performance of our proposed method. Extensive experimental results on real-world urban communities and human mobility data demonstrate the effectiveness of the proposed collective embedding framework.
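
To make the embedding idea concrete: a linear, tied-weight autoencoder attains its optimum at a truncated SVD, so the sketch below uses SVD as a simplified stand-in for the paper's collective deep auto-encoder, and a naive average as a stand-in for the UGWA alignment-and-aggregation step (which, unlike a plain average, would resolve sign/rotation ambiguity across graphs). All data and dimensions are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Hypothetical periodic mobility graphs: 7 daily (n_poi x n_poi)
# weighted adjacency matrices whose edge weights evolve over time.
rng = np.random.default_rng(0)
graphs = [rng.random((50, 50)) for _ in range(7)]

def embed(adj, dim=16):
    """POI embeddings from one mobility graph; truncated SVD stands in
    for a (linear) tied-weight autoencoder."""
    return TruncatedSVD(n_components=dim, random_state=0).fit_transform(adj)

# Naive stand-in for collective learning + UGWA aggregation: average the
# per-graph embeddings (a real alignment step would first reconcile the
# sign/rotation ambiguity between the per-graph solutions).
emb = np.mean([embed(g) for g in graphs], axis=0)
community_vector = emb.mean(axis=0)        # pool POIs into one community
print(emb.shape, community_vector.shape)   # (50, 16) (16,)
```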

High-Precision Camera Localization in Scenes with Repetitive Patterns

This paper presents a high-precision multi-modal approach for localizing moving cameras using monocular videos, which has wide potential in many intelligent applications, e.g., robotics and autonomous vehicles. Existing visual odometry methods often suffer from symmetric or repetitive scene patterns, e.g., windows on buildings or parking stalls. To address this issue, we introduce a robust camera localization method that contributes in two aspects. First, we formulate feature tracking, the critical step of visual odometry, as a hierarchical min-cost network flow optimization task, and regularize the formulation with flow constraints, cross-scale consistencies, and motion heuristics. The proposed formulation can adaptively select features or feature combinations over scale-space that are most distinctive, in contrast to traditional methods that need to detect and group repetitive patterns in a separate step. Second, we further develop a joint formulation for integrating dense visual odometry and sparse GPS readings in a shared reference coordinate. The fusion process is guided by high-order statistics to suppress the impact of drift. We evaluate the proposed method on both public video datasets and a newly created dataset that includes scenes full of repetitive patterns. Results with comparisons show that our method clearly outperforms the alternative methods and is effective in addressing repetitive-pattern issues.
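
As a rough illustration of the min-cost network flow formulation (without the paper's hierarchy, scale-space selection, or motion regularizers), the sketch below associates two frames of hypothetical feature detections using networkx; positions and costs are invented.

```python
import networkx as nx

# Two frames of hypothetical feature detections (1-D positions).
frame_a = {"a0": 10.0, "a1": 42.0}
frame_b = {"b0": 12.0, "b1": 40.0, "b2": 90.0}

G = nx.DiGraph()
G.add_node("S", demand=-2)   # source pushes one unit of flow per track
G.add_node("T", demand=2)    # sink absorbs them
for a in frame_a:
    G.add_edge("S", a, capacity=1, weight=0)
for b in frame_b:
    G.add_edge(b, "T", capacity=1, weight=0)
for a, pa in frame_a.items():
    for b, pb in frame_b.items():
        # matching cost ~ displacement (integer weights for the solver)
        G.add_edge(a, b, capacity=1, weight=int(abs(pa - pb)))

flow = nx.min_cost_flow(G)
matches = [(a, b) for a in frame_a for b, f in flow[a].items() if f > 0]
print(matches)   # [('a0', 'b0'), ('a1', 'b1')]
```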

RelationLines: Visual Reasoning of Egocentric Relations from Heterogeneous Urban Data

The increased accessibility of urban sensor data and the popularity of social network applications are enabling the discovery of crowd mobility and personal communication patterns. However, studying the egocentric relationships of an individual (i.e., the egocentric relations) can be very challenging because available data may refer to direct contacts, such as phone calls between individuals, or indirect contacts, such as paired location presence. In this paper, we develop methods to integrate three facets extracted from heterogeneous urban data (timelines, calls, and locations) through a progressive visual reasoning and inspection scheme. Our approach uses a detect-and-filter scheme such that, prior to visual refinement and analysis, a coarse detection is performed to extract the target individual and construct the timeline of the target. It then detects spatio-temporal co-occurrences or call-based contacts to develop the egocentric network of the individual. The filtering stage is enhanced with a line-based visual reasoning interface that facilitates flexible and comprehensive investigation of egocentric relationships and connections in terms of time, space, and social networks. The integrated system, RelationLines, is demonstrated using a dataset that contains taxi GPS data, cell-based mobility data, mobile calling data, microblog data, and POI data of a city with millions of citizens. We conduct three case studies to examine the effectiveness and efficiency of our system.
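
A minimal sketch of the "detect" stage for indirect contacts, assuming presence records of the form (person, location, timestamp): bucket records by location and time window, then pair the people who share a cell. Real systems would use overlapping windows to avoid splitting nearby events at bucket boundaries.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical presence records: (person_id, location_id, unix_time).
records = [
    ("u1", "mall", 1000), ("u2", "mall", 1100),
    ("u3", "park", 2000), ("u1", "park", 2100),
]

def co_occurrences(records, window=600):
    """Emit person pairs that share a (location, time-bucket) cell."""
    cells = defaultdict(set)
    for person, loc, t in records:
        cells[(loc, t // window)].add(person)
    pairs = set()
    for people in cells.values():
        pairs.update(combinations(sorted(people), 2))
    return pairs

print(co_occurrences(records))   # {('u1', 'u2'), ('u1', 'u3')}
```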

SmartTransfer: Modeling the Spatiotemporal Dynamics of Passenger Transfers for Crowdedness-aware Route Recommendations

In urban transportation systems, transfer stations are hubs connecting a variety of bus and subway lines and are thus the most important nodes in transportation networks. The pervasive availability of large-scale travel traces of passengers, collected from automated fare collection (AFC) systems, has provided unprecedented opportunities for understanding citywide transfer patterns, which can benefit smart transportation, such as smart route recommendation to avoid crowded lines and dynamic bus scheduling to enhance transportation efficiency. To this end, in this paper, we provide a systematic study of the measurement, patterns, and modeling of the spatiotemporal dynamics of passenger transfers. Along this line, we develop a data-driven analytical system for modeling the transfer volumes of each transfer station. More specifically, we first identify and quantify the discriminative patterns of the spatiotemporal dynamics of passenger transfers by utilizing heterogeneous sources of transfer-related data for each station. Also, we develop a multi-task spatiotemporal learning model for predicting the transfer volumes of a specific station at a specific time period. Moreover, we further leverage the predictive model of passenger transfers to provide crowdedness-aware route recommendations. Finally, we conduct extensive evaluations with a variety of real-world data. Experimental results demonstrate the effectiveness of our proposed modeling method and its applications for smart transportation.

Random-Forest Inspired Neural Networks

Neural networks have become very popular in recent years because of the astonishing success of deep learning in various domains such as image and speech recognition. In many of these domains, specific architectures of neural networks, such as convolutional networks, seem to fit the particular structure of the problem domain very well and can therefore perform astonishingly effectively. However, the success of neural networks is not universal across all domains. Indeed, for learning problems without any special structure, or in cases where the data is somewhat limited, neural networks are known not to perform well relative to traditional machine learning methods such as random forests. In this paper, we show that a carefully designed neural network with random forest structure can have better generalization ability. In fact, this architecture is more powerful than random forests, because the back-propagation algorithm reduces to a more powerful and generalized way of constructing a decision tree. Furthermore, the approach is efficient to train and requires only a small constant factor of the number of training examples. This efficiency allows the training of multiple neural networks in order to improve the generalization accuracy. Experimental results on real-world benchmark datasets demonstrate the effectiveness of the proposed enhancements for classification and regression.
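
The paper's exact tree-to-network construction is not reproduced here; the sketch below shows a related, well-known way to couple forests with learned layers, assuming scikit-learn: encode each sample by the leaves it reaches and train a linear head on that leaf embedding.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: a small forest defines the "architecture".
forest = RandomForestClassifier(n_estimators=20, max_depth=5, random_state=0)
forest.fit(X_tr, y_tr)

# Stage 2: each sample -> one-hot code of the leaf it lands in per tree,
# then a trained linear layer on top (the paper instead back-propagates
# through a network whose structure mirrors the trees themselves).
enc = OneHotEncoder(handle_unknown="ignore")
Z_tr = enc.fit_transform(forest.apply(X_tr))
Z_te = enc.transform(forest.apply(X_te))
head = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("forest:", forest.score(X_te, y_te), "forest+head:", head.score(Z_te, y_te))
```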

Understanding Event Organization at Scale in Event-based Social Networks

Understanding real-world event participation behavior has been a subject of active research and can offer valuable insights for event-related recommendation and advertisement. The emergence of event-based social networks (EBSNs), which attract online users to host/attend offline events, has enabled exciting new research in this domain. However, most existing works focus on understanding or predicting individual users' event participation behavior or recommending events to individual users. Few studies have addressed the problem of event popularity from the event organizer's point of view. In this work, we study the latent factors determining event popularity using large-scale datasets collected from the popular Meetup.com EBSN in five major cities around the world. We analyze and model four contextual factors: a spatial factor using location convenience, quality, popularity density, and competitiveness; a group factor using group member entropy and loyalty; a temporal factor using temporal preference and weekly event patterns; and a semantic factor using readability, sentiment, part-of-speech, and text novelty. In addition, we have developed a group-based social influence propagation network to model group-specific influences on events. By combining the COntextual features and Social Influence NEtwork, our integrated prediction framework COSINE can capture the diverse influential factors of event participation and can be used by event organizers to predict/improve the popularity of their events. Detailed evaluations demonstrate that our COSINE framework achieves high accuracy for event popularity prediction in all five cities, with their diverse cultures and user event behaviors.

Characterizing User Skills from Application Usage Traces with Hierarchical Attention Recurrent Networks

Predicting users' proficiencies is a critical component of AI-powered personal assistants. This paper introduces a novel approach for prediction based on users' diverse, noisy, and passively generated application usage histories. We propose a novel Bi-directional Recurrent Neural Network with a multi-layer attention mechanism (m-ATT-BiRNN) to extract sequential patterns and distinguish informative traces from noise. Our model is able to attend to the most discriminative actions and sessions to make more accurate and directly interpretable predictions while requiring 50x less training data than the state-of-the-art sequential learning approach. We evaluate our model with two large-scale datasets collected from 68K Photoshop users: a design skill dataset, where the user skill is determined by the quality of the end products, and a software skill dataset, where users self-disclose their software usage skill levels. The empirical results demonstrate our model's superior performance compared to existing user representation learning techniques that leverage action frequencies and sequential patterns. In addition, we qualitatively illustrate the model's significant interpretative power. The proposed approach is broadly relevant to applications that generate user time-series analytics.
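
A minimal PyTorch sketch of single-level attention over a BiLSTM conveys the flavor of the architecture (the paper's m-ATT-BiRNN attends at multiple levels, e.g., actions and sessions); all sizes are arbitrary.

```python
import torch
import torch.nn as nn

class AttnBiRNN(nn.Module):
    """Single-level attention over a BiLSTM -- a sketch of the flavor
    of the paper's m-ATT-BiRNN, not its exact architecture."""
    def __init__(self, n_actions, emb=32, hid=64, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(n_actions, emb)
        self.rnn = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hid, 1)       # scores each time step
        self.out = nn.Linear(2 * hid, n_classes)

    def forward(self, x):                      # x: (batch, seq) action ids
        h, _ = self.rnn(self.emb(x))           # (batch, seq, 2*hid)
        w = torch.softmax(self.att(h), dim=1)  # attention weights
        ctx = (w * h).sum(dim=1)               # weighted sum of states
        return self.out(ctx), w.squeeze(-1)    # logits + inspectable weights

model = AttnBiRNN(n_actions=500)
logits, weights = model(torch.randint(0, 500, (8, 40)))
print(logits.shape, weights.shape)   # torch.Size([8, 3]) torch.Size([8, 40])
```

The returned attention weights are what makes the predictions "directly interpretable": high-weight time steps mark the actions the model found most discriminative.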

DeepTracker: Visualizing the Training Process of Convolutional Neural Networks

Deep convolutional neural networks (CNNs) have achieved remarkable success in various fields. However, training an excellent CNN is practically a trial-and-error process that consumes a tremendous amount of time and computing resources. To accelerate the training process and reduce the number of trials, experts need to understand what has occurred during training and why the resulting CNN behaves as it does. However, current popular training platforms, such as TensorFlow, provide only limited and generic information, such as training/validation errors, which is far from sufficient for this purpose. To bridge this gap and help domain experts with their training tasks in a practical environment, we propose a visual analytics system, DeepTracker, to facilitate the exploration of the rich dynamics of CNN training processes and to identify the unusual patterns that are hidden in the huge amount of training logs. Specifically, we combine a hierarchical index mechanism and a set of hierarchical small multiples to help experts explore the entire training log at different levels of detail. We also introduce a novel cube-style visualization to reveal the complex correlations among multiple types of heterogeneous training data, including neuron weights, validation images, and training iterations. Three case studies are conducted to demonstrate how DeepTracker provides its users with valuable knowledge in an industry-level CNN training process, namely, in our case, training ResNet-50 on the ImageNet dataset. We show that our method can be easily applied to other state-of-the-art "very deep" CNN models.

Visual Analytics of Heterogeneous Data using Hypergraph Learning

For real-world learning tasks (e.g., classification), graph-based models are commonly used to fuse the information distributed in diverse data sources, which can be heterogeneous, redundant, and incomplete. These models represent the relations in different datasets as pairwise links. However, such links cannot capture high-order relations that connect multiple objects (e.g., more than two patient groups admitted by the same hospital in 2014). In this paper, we propose a visual analytics approach for the classification of heterogeneous datasets using the hypergraph model. The hypergraph is an extension of the traditional graph in which a hyperedge connects multiple vertices instead of just two. We model various high-order relations in heterogeneous datasets as hyperedges and fuse different datasets with a unified hypergraph structure. A hypergraph learning algorithm is used for predicting the missing labels in the datasets. To allow users to inject their domain knowledge into the model-learning process, we augment the traditional learning algorithm in a number of ways. We also propose a set of visualizations that enable the user to construct the hypergraph structure and the parameters of the learning model interactively during the analysis. We demonstrate the capability of our approach via two real-world cases.
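
For readers unfamiliar with hypergraph learning, the sketch below implements a standard Zhou-et-al.-style label propagation on an incidence matrix; it is the generic building block, not the paper's augmented, user-steerable variant.

```python
import numpy as np

def hypergraph_propagate(H, y, w=None, alpha=0.9, iters=100):
    """Label propagation on a hypergraph.
    H: (n_vertices, n_edges) incidence matrix; y: (n, k) one-hot labels
    (all-zero rows = unlabeled); w: optional hyperedge weights."""
    n, m = H.shape
    w = np.ones(m) if w is None else w
    dv = H @ w                        # vertex degrees
    de = H.sum(axis=0)                # hyperedge degrees
    Dv = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    Theta = Dv @ H @ np.diag(w / np.maximum(de, 1e-12)) @ H.T @ Dv
    f = y.astype(float).copy()
    for _ in range(iters):            # smooth labels over hyperedges
        f = alpha * Theta @ f + (1 - alpha) * y
    return f.argmax(axis=1)

# Toy: 5 vertices, hyperedges {0,1,2} and {2,3,4}; labels on 0 and 4.
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]], dtype=float)
y = np.zeros((5, 2)); y[0, 0] = 1; y[4, 1] = 1
print(hypergraph_propagate(H, y))   # [0 0 0 1 1]; vertex 2 is a tie
```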

An Efficient Alternating Newton Method for Learning Factorization Machines

To date, factorization machines (FM) have emerged as a powerful model in many applications. In this work, we study the training of FM with the logistic loss for binary classification, which is a non-linear extension of the linear model with the logistic loss (i.e., logistic regression). For the training of large-scale logistic regression, Newton methods have been shown to be an effective approach, but it is difficult to apply such methods to FM because of the non-convexity. We consider a modification of FM that is multi-block convex and propose an alternating minimization algorithm based on Newton methods. Some novel optimization techniques are introduced to reduce the running time. Our experiments demonstrate that the proposed algorithm is more efficient than stochastic gradient algorithms and coordinate descent methods. The parallelism of our method is also investigated for acceleration in multi-threaded environments.
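
To see why a block structure helps: with the interaction factors V held fixed, the FM score is linear in w, so the regularized logistic loss restricted to w is convex. The sketch below computes the FM score and the gradient of that sub-problem, solved here with plain gradient descent; an alternating Newton method would apply Newton/CG steps to each block instead. All data is synthetic.

```python
import numpy as np

def fm_predict(X, w0, w, V):
    """FM score: w0 + X w + 1/2 * sum_f ((X V)^2 - X^2 V^2)."""
    XV = X @ V
    pair = 0.5 * (XV**2 - (X**2) @ (V**2)).sum(axis=1)
    return w0 + X @ w + pair

def loss_grad_w(X, y, w0, w, V, lam=0.1):
    """Gradient of the (convex-in-w) regularized logistic loss with the
    interaction block V fixed. y in {-1, +1}."""
    z = np.clip(y * fm_predict(X, w0, w, V), -30, 30)
    g = -y / (1.0 + np.exp(z))            # d loss / d score
    return X.T @ g / len(y) + lam * w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(rng.normal(size=200))
w0, w, V = 0.0, np.zeros(10), 0.1 * rng.normal(size=(10, 4))
for _ in range(100):                      # one "block" of the alternation
    w -= 0.5 * loss_grad_w(X, y, w0, w, V)
```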

BayesPiles: Visualisation Support for Bayesian Network Structure Learning

We address the problem of exploring and comparing large collections of scored, directed networks for understanding inferred Bayesian networks used in biology. In this field, heuristic algorithms explore the space of possible network solutions, sampling this space based on algorithm parameters and a network score that encodes the statistical fit to the data. The goal of the analyst is to guide the heuristic search and decide how to determine a final consensus network structure, usually by selecting the top-scoring network or constructing the consensus network from a collection of high-scoring networks. BayesPiles, our visualisation tool, helps analysts understand the structure of the solution space and supports the construction of a final consensus network that is representative of the underlying dataset. BayesPiles builds upon and extends MultiPiles to meet our domain requirements. We developed BayesPiles in conjunction with computational biologists who have used this tool on datasets from their own research. The biologists found that our solution provides them with new insights and helps them achieve results that are representative of the underlying data.
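
A minimal sketch of the consensus-construction step the tool supports, assuming candidate networks are given as adjacency matrices with scores: keep edges that appear in enough of the top-scoring networks. The threshold, inputs, and scores are invented.

```python
import numpy as np

def consensus_network(adj_list, scores, top_k=10, threshold=0.5):
    """Consensus skeleton from the top-k scored networks: keep edges
    present in at least `threshold` of them."""
    order = np.argsort(scores)[::-1][:top_k]
    freq = np.mean([adj_list[i] for i in order], axis=0)
    return (freq >= threshold).astype(int)

# Toy: three candidate 3-node networks with (log-)scores.
nets = [np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]]),
        np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]]),
        np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])]
print(consensus_network(nets, scores=[-10.0, -12.0, -10.5], top_k=2))
```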

Visual Interfaces for Recommendation Systems: Finding Similar and Dissimilar Peers

Recommendation applications can guide users in making important life choices by referring to the activities of similar peers. For example, students making academic plans may learn from the data of similar students, while patients and their physicians may explore data from similar patients to select the best treatment. Selecting an appropriate peer group has a strong impact on the value of the guidance that can result from analyzing the peer group data. In this paper, we describe a visual interface that helps users review the similarity and differences between a seed record and a group of similar records, and refine the selection. We introduce the LikeMeDonuts, Ranking Glyph, and History Heatmap visualizations. The interface was refined through three rounds of formative usability evaluation with 12 target users and its usefulness was evaluated by a case study with a student review manager using real student data. We describe three analytic workflows observed during use and summarize how users' input shaped the final design.

Few-Shot Text and Image Classification via Analogical Transfer Learning

Learning from very few samples is a challenge for machine learning tasks, such as text and image classification. Transfer learning attempts to address this problem by transferring prior knowledge from related domains to enhance the learning performance in the target domain. In previous transfer learning work, instance transfer learning algorithms mostly focus on selecting the source domain instances similar to the target domain instances for transfer. However, the selected instances usually do not directly contribute to the learning performance in the target domain. Hypothesis transfer learning algorithms focus on model/parameter-level transfer. They treat the source hypotheses as well trained and transfer their knowledge in terms of parameters to learn the target hypothesis. Such algorithms directly optimize the target hypothesis by the observable performance improvements. However, they fail to consider that instances contributing to the source hypotheses may be harmful for the target hypothesis, as instance transfer learning analysis shows. To relieve the aforementioned problems, we propose a novel transfer learning algorithm that follows an analogical strategy. In particular, the proposed algorithm first learns a revised source hypothesis with only the instances that contribute to the target hypothesis. Then, the proposed algorithm transfers both the revised source hypothesis and the target hypothesis (trained with only a few samples) to learn an analogical hypothesis. We denote our algorithm as Analogical Transfer Learning. Extensive experiments on one synthetic dataset and three real-world benchmark datasets demonstrate the superior performance of the proposed algorithm.

Discriminative and Orthogonal Subspace Constraints based Nonnegative Matrix Factorization

Nonnegative matrix factorization (NMF) is a widely used feature extraction technique for image clustering and image classification. For the former task, various unsupervised NMF methods based on the structure of the data distribution have been proposed, while for the latter task, the label information of the dataset is a very important guide. However, most previously proposed supervised NMF methods emphasize imposing discriminant constraints on the coefficient matrix. When dealing with newly arriving samples, the transpose or the pseudoinverse of the basis matrix is used to project these samples into the low-dimensional space, so the influence of the labels on the basis matrix is only indirect. Although some methods do try to constrain the basis matrix within the NMF framework, they either only restrict within-class samples or impose improper constraints on the basis matrix. To address these problems, in this paper a novel NMF framework named discriminative and orthogonal subspace constraints based nonnegative matrix factorization (DOSNMF) is proposed. In DOSNMF, the discriminative constraints are imposed on the projected subspace instead of the directly learned representation. In this manner, the discriminative information is directly connected with the projected subspace. At the same time, an orthogonal term is incorporated in DOSNMF to adjust the orthogonality of the learned basis matrix, which ensures the orthogonality of the learned subspace and improves the sparseness of the basis matrix at the same time. This framework can be implemented in two ways. The first way is based on manifold learning theory: two graphs, the intrinsic graph and the penalty graph, are constructed to capture the intra-class structure and the inter-class distinctness, so that both the manifold structure information and the discriminative information of the dataset are utilized. For convenience, we give this method the same name as the framework, i.e., DOSNMF. The second way is based on Fisher's criterion; we name it Fisher's criterion based DOSNMF (FDOSNMF). The objective functions of DOSNMF and FDOSNMF can be easily optimized using multiplicative update (MU) rules. The new methods are tested on five datasets and compared with several supervised and unsupervised variants of NMF. The experimental results reveal the effectiveness of the proposed methods.
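
The following sketch shows only the orthogonality ingredient, not the full DOSNMF objective (no discriminative graph or Fisher terms): multiplicative updates for min ||V - WH||_F^2 + lam*||W^T W - I||_F^2 with W, H >= 0.

```python
import numpy as np

def orth_nmf(V, r, lam=1.0, iters=300, seed=0):
    """Multiplicative updates for orthogonality-regularized NMF."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        # Penalty gradient splits into -4*lam*W (negative part) and
        # +4*lam*W W^T W (positive part); a common factor of 2 cancels.
        W *= (V @ H.T + 2 * lam * W) / (
            W @ (H @ H.T) + 2 * lam * W @ (W.T @ W) + 1e-12)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(40, 30)))
W, H = orth_nmf(V, r=5, lam=5.0)
print(np.round(W.T @ W, 2))   # pushed toward the identity matrix
```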

Goal and Plan Recognition Design for Plan Libraries

The paper provides new techniques for optimizing domain design for goal and plan recognition using plan libraries. We define two new problems: Goal Recognition Design for Plan Libraries (GRD-PL) and Plan Recognition Design (PRD). Solving GRD-PL helps to infer which goal the agent is trying to achieve, while solving PRD can help to infer how the agent is going to achieve its goal. For each problem, we define a worst-case distinctiveness measure that is an upper bound on the number of observations necessary to unambiguously recognize the agent's goal or plan. The paper studies the relationship between these measures, showing that the worst-case distinctiveness of GRD-PL is a lower bound of the worst-case plan distinctiveness of PRD, and that they are equal under certain conditions. We provide two complete algorithms for minimizing the worst-case distinctiveness of plan libraries without reducing the agent's ability to complete its goals: one is a brute-force search over all possible plans, and the other is a constraint-based search that identifies plans that are most difficult to distinguish in the domain. These algorithms are evaluated in three hierarchical plan recognition settings from the literature. We were able to reduce the worst-case distinctiveness of the domains using our approach, in some cases reaching 100% improvement within a predesignated time window. Our iterative algorithm outperforms the brute-force approach by an order of magnitude in terms of runtime.
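
A brute-force reading of worst-case distinctiveness, under the simplifying assumption that plans are flat observation sequences (the paper works with hierarchical libraries): the length of the longest prefix shared by plans of two different goals. The library below is invented.

```python
from itertools import combinations

# Hypothetical plan library: goal -> list of observation sequences.
library = {
    "make_tea":    [("boil", "pour", "steep")],
    "make_coffee": [("boil", "pour", "grind"), ("grind", "boil", "pour")],
}

def wcd(library):
    """Longest prefix shared by plans of two *different* goals, i.e.,
    how many observations can pass before the goal is unambiguous."""
    worst = 0
    for g1, g2 in combinations(sorted(library), 2):
        for p in library[g1]:
            for q in library[g2]:
                k = 0
                while k < min(len(p), len(q)) and p[k] == q[k]:
                    k += 1
                worst = max(worst, k)
    return worst

print(wcd(library))   # 2 -- "boil, pour" fits both goals
```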

X-CLEAVER: Learning Ranking Ensembles by Growing and Pruning Trees

Learning-to-Rank (LtR) solutions are commonly used in large-scale information retrieval systems such as Web search engines, where high-quality documents need to be returned in response to a user query within a fraction of a second. The most effective LtR algorithms, e.g., λ-MART, adopt a gradient boosting approach to build an additive ensemble of weighted regression trees. Since the required ranking effectiveness is achieved with very large ensembles, the impact of these solutions on response time and query throughput is not negligible. In this paper we propose X-CLEaVER, an iterative meta-algorithm able to build more efficient and effective ranking ensembles. X-CLEaVER interleaves the iterations of a given ensemble learning algorithm with pruning and re-weighting phases. First, redundant trees are removed from the generated ensemble; then, the weights of the remaining trees are fine-tuned by optimizing the desired ranking loss function. We propose and analyse several pruning strategies and assess their benefits, showing that interleaving pruning and re-weighting phases during learning is more effective than applying a single post-learning optimization step. Experiments conducted using two publicly available LtR datasets show that X-CLEaVER is very effective in optimizing λ-MART models in terms of both effectiveness and efficiency.
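
The sketch below imitates the prune-then-re-weight idea on a scikit-learn gradient-boosted regressor (X-CLEaVER itself targets λ-MART ranking ensembles and optimizes a ranking loss): drop trees by a crude redundancy proxy, then refit the remaining trees' weights by least squares. The pruning rule and data are invented for illustration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=100, random_state=0).fit(X, y)

# Per-tree prediction matrix (n_samples x n_trees) and the constant init.
P = np.column_stack([t[0].predict(X) for t in gbm.estimators_])
base = gbm.init_.predict(X).ravel()

# Prune: keep the half of the trees with the largest output variance
# (a crude redundancy proxy; the paper analyses several strategies).
keep = np.argsort(P.var(axis=0))[P.shape[1] // 2:]

# Re-weight: refit the kept trees' weights by least squares against the
# residual target, replacing the uniform learning-rate weighting.
w, *_ = np.linalg.lstsq(P[:, keep], y - base, rcond=None)
pred = base + P[:, keep] @ w
print("full MSE:  ", np.mean((gbm.predict(X) - y) ** 2))
print("pruned MSE:", np.mean((pred - y) ** 2))
```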

Enumerating Connected Subgraphs and Computing the Myerson and Shapley Values in Graph-restricted Games

At the heart of multi-agent systems is the ability to cooperate in order to improve the performance of individual agents and/or the system as a whole. While a widespread assumption in the literature is that such cooperation is essentially unrestricted, in many realistic settings this assumption does not hold. A highly influential approach for modelling such scenarios is the graph-restricted games introduced by Myerson. In this approach, agents are represented by nodes in a graph, edges represent communication channels, and a group can generate an arbitrary value only if there exists a direct or indirect communication channel between every pair of agents within the group. Two fundamental solution concepts proposed for such games are the Myerson value and the Shapley value. While an algorithm has been developed to compute the Shapley value in arbitrary graph-restricted games, no such general-purpose algorithm has been developed for the Myerson value to date. With this in mind, we set out to develop for such games a general-purpose algorithm to compute the Myerson value, and a more efficient algorithm to compute the Shapley value. Since the computation of either value involves enumerating all connected induced subgraphs of the game's underlying graph, we start by developing an algorithm dedicated to this enumeration, and show empirically that it is faster than the state of the art in the literature. Finally, we present a sample application of both algorithms, in which we test the Myerson value and the Shapley value as advanced measures of node centrality in networks.
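
Since both values hinge on enumerating connected induced subgraphs, here is a simple duplicate-free enumerator (the paper contributes a faster one): root each subgraph at its smallest vertex, and at each step either add a frontier vertex or ban it from the current branch.

```python
def connected_subgraphs(adj):
    """All connected induced subgraphs (as vertex sets) of an undirected
    graph given as {vertex: set_of_neighbours}."""
    out = []
    verts = sorted(adj)
    for i, root in enumerate(verts):
        allowed = set(verts[i:])          # root is the smallest vertex
        def rec(sub, frontier, banned):
            out.append(frozenset(sub))
            banned = set(banned)
            for v in sorted(frontier - banned):
                grown = {u for u in adj[v] if u in allowed}
                rec(sub | {v}, (frontier | grown) - sub - {v} - banned, banned)
                banned.add(v)             # later branches must exclude v
        rec({root}, {u for u in adj[root] if u in allowed}, set())
    return out

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(len(connected_subgraphs(triangle)))  # 7: 3 singletons, 3 edges, 1 triangle
```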

Learning Facial Expressions with 3D Mesh Convolutional Neural Network

Making machines understand human expressions enables various useful applications in human-machine interaction. In this paper, we present a novel facial expression recognition approach based on a 3D Mesh Convolutional Neural Network (3DMCNN), along with a visual analytics guided 3DMCNN design and optimization scheme. From an RGBD camera, we first reconstruct a 3D face model of a subject with facial expressions and then compute the geometric properties of the surface. Instead of using a regular Convolutional Neural Network (CNN) to learn intensities of the facial images, we convolve the geometric properties on the surface of the 3D model using the 3DMCNN. We design a geodesic distance-based convolution method to overcome the difficulties arising from the irregular sampling of the face surface mesh. We further present an interactive visual analytics approach for designing and modifying the networks, analyzing the learned features, and clustering similar nodes in the 3DMCNN. By removing low-activity nodes in the network, the performance of the network is greatly improved. We compare our method with the regular CNN-based method by interactively visualizing each layer of the networks, and we analyze the effectiveness of our method by studying representative cases. Testing on public datasets, our method achieves a higher recognition accuracy than traditional image-based CNNs and other 3D CNNs. The proposed framework, including the 3DMCNN and the interactive visual analytics of the CNN, can be extended to other applications.

Traffic Simulation and Visual Verification in Smog

Smog lowers visibility on the road and can compromise traffic safety, so modeling traffic in smog is important for realistic traffic simulation. Most existing traffic models assume that drivers have optimal vision, which makes them unsuitable for simulating smoggy weather conditions. In this paper, we introduce the smog full velocity difference model (SMOG-FVDM) for realistic simulation of traffic in smog. In this model, we present a stadia model for drivers in smog, incorporate it into a car-following traffic model through "psychological force" and "body force" terms, and thereby obtain the SMOG-FVDM. Because the SMOG-FVDM has a large number of parameters, we design a visual verification system based on it that helps find adequate parameter settings by showing visual simulation results under different road scenarios and different smog densities. Experimental results show that our model yields realistic and efficient traffic simulations in smog weather conditions.
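
A toy numerical sketch of a full velocity difference model in which reduced visibility is approximated by clipping the perceived headway — a hypothetical simplification of the paper's stadia model and its psychological/body force terms; all parameters are invented.

```python
import numpy as np

def optimal_velocity(h, v_max=30.0, h_c=25.0):
    """Standard FVDM optimal-velocity function of the headway h (m)."""
    return v_max / 2 * (np.tanh((h - h_c) / 10) + np.tanh(h_c / 10))

def fvdm_step(x, v, dt=0.1, kappa=0.4, lam=0.5, visibility=np.inf):
    """One Euler step of a single-lane FVDM; smog is approximated by
    clipping the perceived headway at the visibility distance."""
    headway = np.diff(x)                   # distance to the car ahead
    dv = np.diff(v)                        # leader speed minus own speed
    perceived = np.minimum(headway, visibility)
    acc = kappa * (optimal_velocity(perceived) - v[:-1]) + lam * dv
    v[:-1] += acc * dt                     # the front car keeps its speed
    x += np.maximum(v, 0) * dt
    return x, v

x = np.arange(0.0, 500.0, 50.0)   # 10 cars, 50 m apart
v = np.full(10, 20.0)             # all at 20 m/s
for _ in range(600):
    x, v = fvdm_step(x, v, visibility=30.0)
print(np.round(np.diff(x), 1))    # headways relax toward an equilibrium
```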

CapVis: Towards Better Understanding of Visual-Verbal Saliency Consistency

When looking at an image, humans shift their attention towards interesting regions, making sequences of eye fixations. When describing an image, they also come up with simple sentences that highlight the key elements in the scene. What is the correlation between where people look and what they describe in an image? To investigate this problem intuitively, we develop a visual analytics system, CapVis, to look into eye fixations and image captions, two types of subjective annotations that are relatively task-free and natural. From the annotations, we propose a word-weighting scheme to extract visual and verbal saliency ranks to compare against each other. In our approach, a number of low-level and semantic-level features relevant to visual-verbal saliency consistency are proposed and visualized in multiple facets for a better understanding of image content. Our method also shows the different ways in which humans and computational models look at and describe images, which provides reliable information for diagnosing captioning models. Experiments also show that the visualized features can be integrated into a computational model to effectively predict the consistency between the two modalities on image datasets with both types of annotations.

A Cross-Domain Recommendation Mechanism for Cold-Start Users Based on Partial Least Squares Regression

Recommender systems have become common on e-commerce platforms in recent years. They help users find preferential items among a large number of products, saving users' time and increasing sellers' profits. Cross-domain recommender systems aim to recommend items based on users' different tastes across domains. While recommender systems usually suffer from the user cold-start problem, which leads to unsatisfying recommendation performance, cross-domain recommendation can remedy this problem. This paper proposes a novel cross-domain recommendation model based on a regression analysis technique, partial least squares regression (PLSR). The proposed recommendation models, PLSR-CrossRec and PLSR-Latent, are able to use only source-domain ratings to predict the ratings for cold-start users who have never rated items in the target domains. Experiments conducted on the Epinions dataset with rating records from ten different domains demonstrate that PLSR-Latent can outperform several matrix factorization-based competing methods under a variety of cross-domain settings. The time efficiency of PLSR-Latent is also satisfactory.
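
A minimal sketch of the PLSR idea with scikit-learn, assuming dense rating matrices for users observed in both domains (the flavor of PLSR-CrossRec, which maps raw ratings; PLSR-Latent maps factorized latent representations instead). All data is synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_users, n_src_items, n_tgt_items = 300, 40, 25

# Hypothetical dense rating matrices sharing a latent user taste
# (e.g., books as source, movies as target); real data would be sparse.
latent = rng.normal(size=(n_users, 5))
R_src = latent @ rng.normal(size=(5, n_src_items)) + 0.1 * rng.normal(size=(n_users, n_src_items))
R_tgt = latent @ rng.normal(size=(5, n_tgt_items)) + 0.1 * rng.normal(size=(n_users, n_tgt_items))

# Fit PLSR mapping source-domain ratings to target-domain ratings.
pls = PLSRegression(n_components=5)
pls.fit(R_src[:250], R_tgt[:250])

# Cold-start users: source ratings available, no target ratings.
pred = pls.predict(R_src[250:])
rmse = np.sqrt(np.mean((pred - R_tgt[250:]) ** 2))
print("held-out RMSE:", round(rmse, 3))
```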

ResumeVis: A Visual Analytics System to Discover Semantic Information in Semi-structured Resume Data

Massive public resume data emerging on the Internet conveys individual-related characteristics in terms of profiles and career experiences. Resume Analysis (RA) provides opportunities for many applications, such as recruitment trend prediction, talent seeking, and evaluation. Existing RA studies either rely largely on the knowledge of domain experts or leverage classic statistical or data mining models to identify and filter explicit attributes based on pre-defined rules. However, they fail to discover the latent semantic information in semi-structured resume text, i.e., individual career progress trajectories and social relations, which are otherwise vital to a comprehensive understanding of people's career evolution patterns. Besides, when dealing with massive resumes, properly visualizing such semantic information to reduce the information load and to support better human cognition is also challenging. To tackle these issues, we propose a visual analytics system, ResumeVis, to mine and visualize resume data. First, a text-mining based approach is presented to extract semantic information. Then, a set of visualizations is devised to represent the semantic information from multiple perspectives. Through interactive exploration of ResumeVis by domain experts, the following tasks can be accomplished: tracing individual career evolution trajectories; mining latent social relations among individuals; and obtaining a full picture of the collective mobility reflected in massive resumes. Case studies with over 2,500 online officer resumes demonstrate the effectiveness of our system.

Adaptive Online One-Class Support Vector Machines with Applications in Structural Health Monitoring

One-class support vector machines (OCSVM) have been widely used in the area of structural health monitoring, where only data from one class (i.e., healthy) are available. Incremental learning of OCSVM is critical for online applications in which huge data streams continuously arrive and the healthy data distribution may vary over time. This paper proposes a novel adaptive self-advised online OCSVM that incrementally tunes the kernel parameter and decides whether a model update is required. As opposed to existing methods, this online algorithm does not rely on any fixed threshold; instead, it uses the slack variables in the OCSVM to determine which new data points should be included in the training set and trigger a model update. The algorithm also incrementally tunes the kernel parameter of the OCSVM automatically, based on the spatial locations of the edge and interior samples in the training data with respect to the constructed hyperplane of the OCSVM. This new online OCSVM algorithm was extensively evaluated using synthetic data and real data from case studies in structural health monitoring. The results showed that the proposed method significantly improved the classification error rates, was able to assimilate changes in the positive data distribution over time, and maintained a high damage detection accuracy in all case studies.
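
A rough sketch of slack-guided online retraining with scikit-learn's OneClassSVM: incoming points with negative decision scores (a proxy for nonzero slack) are added to the pool and trigger a refit. The paper's algorithm is genuinely incremental and also tunes the kernel width from edge/interior samples; this sketch simply refits from scratch with a heuristic gamma, on invented data.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def online_ocsvm(stream, warmup=200, nu=0.05):
    """Flag points with negative decision scores and refit on them."""
    pool = list(stream[:warmup])
    model = OneClassSVM(nu=nu, gamma="scale").fit(np.array(pool))
    flags = []
    for x in stream[warmup:]:
        score = model.decision_function(x.reshape(1, -1))[0]
        flags.append(score < 0)
        if score < 0:                 # adapt to a drifting healthy state
            pool.append(x)
            model = OneClassSVM(nu=nu, gamma="scale").fit(np.array(pool))
    return model, flags

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(500, 3))
drifted = rng.normal(0.5, 1.0, size=(150, 3))   # shifted distribution
model, flags = online_ocsvm(np.vstack([healthy, drifted]))
print("points that triggered an update:", sum(flags))
```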

TPM: A Temporal Personalized Model for Spatial Item Recommendation

With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important way of helping users discover interesting locations to increase their engagement with location-based services. The availability of spatial, temporal, and social information in LBSNs offers an unprecedented opportunity to enhance spatial item recommendation. Much previous work has studied spatial and social influences on spatial item recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, which include the sequential influence and temporal cyclic effects, it is essential for a spatial item recommender system to exploit the temporal effect to improve recommendation accuracy. Leveraging temporal information in spatial item recommendation is, however, very challenging, considering that 1) when integrating sequential influences, users' check-in data in LBSNs have a low sampling rate in both space and time, which renders existing location prediction techniques on GPS trajectories ineffective, and the prediction space is extremely large, with millions of distinct locations as the next prediction target, which impedes the application of classical Markov chain models; 2) there are various temporal cyclic patterns (i.e., daily, weekly, and monthly) in LBSNs, but existing work is limited to one specific pattern; and 3) there is no existing framework that unifies users' personal interests, temporal cyclic patterns, and the sequential influence of recently visited locations in a principled manner. In light of the above challenges, we propose a Temporal Personalized Model (TPM), which introduces a novel latent variable, topic-region, to model and fuse sequential influence and cyclic patterns with personal interests in the latent and exponential space. The advantages of modeling the temporal effect at the topic-region level include a significantly reduced prediction space, an effective alleviation of data sparsity, and a direct expression of the semantic meaning of users' spatial activities. Moreover, we introduce two methods to model the effect of various cyclic patterns. The first is a time indexing scheme that encodes the effect of various cyclic patterns into a binary code; however, this indexing scheme faces a data sparsity problem in each time slice. To deal with this data sparsity problem, the second method slices the time according to each cyclic pattern separately and explores these patterns in a joint additive model. Furthermore, we design an asymmetric Locality Sensitive Hashing (ALSH) technique to speed up the online top-k recommendation process by extending traditional LSH.
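
A toy version of the first method, the time-indexing scheme: encode each check-in's cyclic context (weekly, daily, monthly) as a binary code. The specific bits below are invented; the actual codes and patterns used by TPM may differ.

```python
from datetime import datetime

def time_index(ts: datetime) -> str:
    """Binary code of cyclic context: [weekend?][evening?][late month?].
    Every check-in sharing a code falls into the same time slice, which
    is why sparse slices motivate the paper's second, additive method."""
    weekend = int(ts.weekday() >= 5)
    evening = int(ts.hour >= 18 or ts.hour < 6)
    late_month = int(ts.day > 15)
    return f"{weekend}{evening}{late_month}"

print(time_index(datetime(2018, 3, 10, 20, 30)))   # Saturday evening -> '110'
```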

