ACM Transactions on Software Engineering and Methodology (TOSEM)

Latest Articles


Runtime Fault Detection in Programmed Molecular Systems

Watchdog timers are devices commonly used to monitor the health of safety-critical hardware and software systems. Their primary function is…

Understanding and Analyzing Java Reflection

Java reflection has been widely used in a variety of applications and frameworks. It allows a software system to inspect and change the behaviour of its classes, interfaces, methods, and fields at runtime, enabling the software to adapt to dynamically changing runtime environments. However, this dynamic language feature imposes significant…

Domain Analysis and Description Principles, Techniques, and Modelling Languages

We present a method for analysing and describing domains. By a domain we shall understand a rationally describable segment of a human assisted…

Status Quo in Requirements Engineering: A Theory and a Global Family of Surveys

Requirements Engineering (RE) has established itself as a software engineering discipline over the past decades. While researchers have been investigating the RE discipline with a plethora of empirical studies, attempts to systematically derive an empirical theory in the context of the RE discipline have only recently begun. However, such a theory is needed if we are to define and motivate…

Isolation Modeling and Analysis Based on Mobility

In a mobile system, mobility refers to a change in position of a mobile object with respect to time and its reference point, whereas isolation means…

How Understandable Are Pattern-based Behavioral Constraints for Novice Software Designers?

This article reports a controlled experiment with 116 participants on the understandability of representative graphical and textual pattern-based…


ACM Transactions on Software Engineering and Methodology announces new 'fast-impact' track

TOSEM will launch a new fast-impact track with an upper-bounded turnaround time for papers that qualify as journal-first papers and do not exceed a reasonable length. The track will be launched soon after ICSE.


ACM Transactions on Software Engineering and Methodology announces new ACM TOSEM board of distinguished reviewers

The new TOSEM board of distinguished reviewers guarantees timely and high quality reviews, and acknowledges the invaluable work of many reviewers.

Please use this form to nominate yourself for the TOSEM Board of Distinguished Reviewers; self-nominations are encouraged.



ACM Transactions on Software Engineering and Methodology Names Mauro Pezzè as Editor-in-Chief

ACM Transactions on Software Engineering and Methodology (TOSEM) welcomes Mauro Pezzè as its new Editor-in-Chief for the term January 1, 2019 to December 31, 2021.

Forthcoming Articles
Developing and Evaluating Objective Termination Criteria for Random Testing

Random testing is a black-box software testing technique in which programs are tested by generating and executing random inputs. Because of its unstructured nature, it is difficult to determine when to stop a random testing process: faults may be missed if the process is stopped prematurely, and resources may be wasted if it runs too long. To address this, we propose two promising termination criteria, 'All Equivalent' (AEQ) and 'All Included in One' (AIO), applicable to random testing. These criteria stop random testing once the process has reached a code-coverage-based saturation point, after which additional testing effort is unlikely to provide additional effectiveness. We model and implement them in the context of a general random testing process composed of independent random testing sessions. Thirty-six experiments involving GUI testing and unit testing of Java applications demonstrate that the AEQ criterion is generally able to stop the process when code coverage equal or very close to the saturation level is reached, while AIO can stop the process earlier in cases where it reaches the saturation level of coverage. In addition, the two criteria generally outperform time-based termination criteria adopted in the literature.
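The two criteria can be pictured as simple checks over the coverage sets reported by the independent testing sessions. The following sketch is illustrative only (the names, the set-based coverage representation, and the session model are assumptions, not the paper's implementation):

```python
def aeq(coverage_sets):
    """'All Equivalent': stop once every session has converged on the
    same coverage set -- a sign that coverage has saturated."""
    return len(coverage_sets) > 1 and all(
        s == coverage_sets[0] for s in coverage_sets[1:])

def aio(coverage_sets):
    """'All Included in One': stop once a single session's coverage
    subsumes every other session's coverage."""
    return len(coverage_sets) > 1 and any(
        all(other <= big for other in coverage_sets)
        for big in coverage_sets)

# Example: sessions report the set of covered branches so far.
# AIO fires here (the first session subsumes the second), AEQ does not,
# which illustrates why AIO can stop the process earlier.
sessions = [{1, 2, 3}, {1, 2}]
stop_aeq, stop_aio = aeq(sessions), aio(sessions)
```

Since AEQ demands strict equality while AIO only demands inclusion in one session, AIO's condition is the weaker of the two and triggers no later than AEQ's.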

Efficient verification of concurrent systems using synchronisation analysis and SAT/SMT solving

This paper investigates how the use of approximations can make the formal verification of concurrent systems scalable. We propose the idea of \emph{synchronisation analysis} to automatically capture global invariants and approximate reachability. We calculate invariants on how components participate in global system synchronisations and use a notion of consistency between these invariants to establish whether components can effectively communicate to reach some system state. Our synchronisation-analysis techniques try to show that a system state is unreachable by demonstrating either that components cannot agree on the order in which they participate in system rules, or that they cannot agree on the number of times they participate in system rules. These fully automatic techniques are applied to check deadlock and local-deadlock freedom in the \PairStatic{} framework, which extends \Pair{} (a recent framework where we use pure pairwise analysis of components and SAT checkers to check deadlock and local-deadlock freedom) with techniques to carry out synchronisation analysis. So, unlike \Pair{}, it can leverage \emph{global} invariants found by synchronisation analysis, thereby improving the reachability approximation and tightening our verifications. We implement \PairStatic{} in our DeadlOx tool using SAT/SMT solving and demonstrate the improvements these techniques create in checking (local-)deadlock freedom.
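The count-based half of the idea ("components cannot agree on the number of times they participate in system rules") can be illustrated with a toy enumeration. This is a didactic sketch only, with made-up automata and a path bound; the paper's actual analysis is carried out symbolically with SAT/SMT rather than by enumeration:

```python
from collections import Counter

def count_vectors(transitions, start, target, shared, bound):
    """For paths of length <= bound from `start` to `target`, collect
    the possible participation counts on shared rules."""
    results = set()
    stack = [(start, ())]  # (current state, actions taken so far)
    while stack:
        state, taken = stack.pop()
        if state == target:
            counts = Counter(a for a in taken if a in shared)
            results.add(tuple(sorted(counts.items())))
        if len(taken) < bound:
            for (src, act, dst) in transitions:
                if src == state:
                    stack.append((dst, taken + (act,)))
    return results

def may_agree(vecs_a, vecs_b):
    """The joint state may be reachable only if the two components share
    some count vector on the shared rules; disjoint sets prove it isn't."""
    return bool(vecs_a & vecs_b)

# Component A needs the shared rule 'sync' twice to reach its state;
# component B reaches its state after exactly one 'sync': no agreement,
# so the combined state is unreachable.
A = count_vectors([('a0', 'sync', 'a1'), ('a1', 'sync', 'a2')],
                  'a0', 'a2', {'sync'}, 3)
B = count_vectors([('b0', 'sync', 'b1')], 'b0', 'b1', {'sync'}, 3)
```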

Automated N-way Program Merging for Facilitating Family-Based Analyses of Variant-Rich Software

Family-based analysis strategies have recently shown very promising potential for improving efficiency in applying quality-assurance techniques to variant-rich programs, as compared to variant-by-variant approaches. These strategies require a single program representation superimposing all program variants in a syntactically well-formed, semantically sound and variant-preserving manner, which is usually not available and hard to obtain manually in practice. We present a novel methodology, called SiMPOSE, for automatically generating superimpositions of existing program variants to facilitate family-based analyses of variant-rich software. Our N-way model-merging methodology integrates control-flow automaton (CFA) representations of N variants of a C program into one unified CFA representation, constituting a unified program abstraction as used internally by many recent software-analysis tools. To cope with the complexity of N-way merging, we (1) utilize principles of similarity propagation to reduce the number of N-way matches, and (2) decompose sets of N variants into subsets, thus enabling incremental N-way merging. In our experimental evaluation, we apply our SiMPOSE tool to collections of realistic C programs. The results reveal substantial efficiency improvements for family-based analyses, by an average factor of up to 2.9 for unit-test generation and 2.4 for model checking, as compared to variant-by-variant practices, clearly amortizing the additional effort for N-way merging.

The Virtual Developer: Integrating Code Generation and Manual Development with Conflict Resolution

Model-Driven Development (MDD) requires proper tools to derive the implementation code from the application models. However, the integration of manually programmed and automatically generated code is a long-standing issue, which affects the adoption of MDD in industry. This paper presents a model and code co-evolution approach that addresses this problem a posteriori, using the standard conflict-detection capabilities of Version Control Systems to support the semi-automatic merge of the two types of code. We assess the proposed approach by contrasting it with the more traditional template-based, forward-engineering process adopted by most MDD tools.

Editorial for TOSEM 28:3

Verifying and Quantifying Side-Channel Resistance of Masked Software Implementations

Neural Network Based Detection of Self-admitted Technical Debt: From Performance to Explainability

Self-admitted technical debt (SATD) has been proposed to identify debt that is intentionally introduced during software development. Previous studies leveraged human-summarized patterns or text-mining techniques to detect SATD in source-code comments. However, several characteristics of SATD features in code comments, e.g., vocabulary diversity, project uniqueness, and length and semantic variations, pose a significant challenge to the accuracy of pattern-based or text-mining-based SATD detection. Furthermore, although text-mining-based methods outperform pattern-based methods in prediction accuracy, the text features they use are less intuitive than human-summarized patterns, which makes the prediction results hard to explain. To improve the accuracy of SATD prediction, we propose a Convolutional Neural Network (CNN)-based approach to detect SATD comments. To improve the explainability of our model's prediction results, we exploit the computational structure of CNNs to identify the key phrases and patterns in code comments that are most relevant to SATD. We conducted an extensive set of experiments with 62,566 code comments from 10 open-source projects and a user study with 150 comments from another three projects. Our evaluation confirms the effectiveness of different aspects of our approach and its superior performance, generalizability, adaptability and explainability over current state-of-the-art text-mining-based methods for SATD classification.
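The explainability idea — tracing a convolutional filter's strongest activation back to the input window that produced it — can be shown in miniature. Everything below (the two-dimensional embeddings, the single width-2 filter, the token values) is made up for illustration; a real model learns many filters over learned embeddings:

```python
def conv_keyphrase(tokens, embed, filt, width=2):
    """Slide a width-2 filter over concatenated token embeddings and
    return the window with the highest activation: the 'key phrase'
    this filter responds to, i.e. the max-pooling traceback used to
    explain a CNN's prediction."""
    best_score, best_span = float('-inf'), None
    for i in range(len(tokens) - width + 1):
        window = [v for t in tokens[i:i + width] for v in embed[t]]
        score = sum(w * x for w, x in zip(filt, window))
        if score > best_score:
            best_score, best_span = score, tokens[i:i + width]
    return best_span, best_score

# A filter whose weights respond to debt-like tokens picks out the
# phrase "TODO fix" as the most SATD-relevant window of this comment.
embed = {"TODO": [1, 0], "fix": [1, 0], "this": [0, 0], "later": [0, 0]}
span, score = conv_keyphrase(["TODO", "fix", "this", "later"],
                             embed, [1, 0, 1, 0])
```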

An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation

Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. To explore this potential, we perform an empirical study to assess the feasibility of using Neural Machine Translation techniques for learning bug-fixing patches for real defects. First, we mine millions of bug fixes from the change histories of projects hosted on GitHub, in order to extract meaningful examples of such bug fixes. Next, we abstract the buggy and corresponding fixed code, and use them to train an Encoder-Decoder model able to translate buggy code into its fixed version. In our empirical investigation, we found that such a model is able to fix thousands of unique buggy methods in the wild. Overall, this model is capable of predicting fixed patches generated by developers in 9-50% of the cases, depending on the number of candidate patches we allow it to generate. Also, the model is able to emulate a variety of different Abstract Syntax Tree operations and generate candidate patches in a split second.
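The abstraction step — replacing concrete identifiers and literals with indexed placeholders so the translation model sees a small, shared vocabulary — can be sketched as follows. This is a simplified, regex-based illustration with a tiny keyword list, not the paper's pipeline, which would use a proper lexer:

```python
import re

def abstract_code(src):
    """Replace identifiers and literals with indexed placeholders
    (VAR_n, LIT_n); the returned mapping lets concrete names be
    restored after the model translates the abstracted code."""
    keywords = {"int", "return", "if", "else", "while", "for"}  # illustrative subset
    mapping, counters = {}, {"VAR": 0, "LIT": 0}

    def repl(match):
        tok = match.group(0)
        if tok in keywords:
            return tok  # keywords stay concrete
        kind = "LIT" if tok[0].isdigit() else "VAR"
        if tok not in mapping:
            counters[kind] += 1
            mapping[tok] = f"{kind}_{counters[kind]}"
        return mapping[tok]

    abstracted = re.sub(r"[A-Za-z_]\w*|\d+", repl, src)
    return abstracted, mapping

# "int foo(int a){ return a + 1; }"
#   -> "int VAR_1(int VAR_2){ return VAR_2 + LIT_1; }"
code, mapping = abstract_code("int foo(int a){ return a + 1; }")
```

Because the same identifier always maps to the same placeholder, the buggy and fixed versions of a method share a consistent abstract vocabulary, which is what makes them usable as a translation pair.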

Automated Reuse of Model Transformations through Typing Requirements Models

Model transformations are key in model-driven engineering, where they automate the manipulation of models. However, they are typed with respect to concrete source and target metamodels, making their reuse for other (even similar) metamodels challenging. To address this, we propose capturing the typing requirements for reusing a transformation with other metamodels via the notion of a typing requirements model (TRM). A TRM describes the prerequisites that a transformation imposes on source and target metamodels to obtain a correct typing. This way, any metamodel pair that satisfies the TRM is a valid reuse context for the transformation. A TRM is made of two domain requirement models (DRMs) describing the requirements for source and target metamodels, and a compatibility model expressing dependencies between them. We define a notion of refinement between DRMs, seeing metamodels as a special case of DRM. We provide a catalogue of refinements and describe how to automatically extract a TRM from an ATL transformation. The approach is supported by our tool TOTEM. We report on two experiments -- based on transformations developed by third parties and metamodel mutation -- validating the correctness and completeness of TRM extraction and confirming the power of TRMs to encode variability and support flexible reuse.


ACM Transactions on Software Engineering and Methodology (TOSEM) is part of the family of journals produced by the ACM, the Association for Computing Machinery.

TOSEM publishes one volume yearly. Each volume comprises four issues, which appear in January, April, July, and October.

