

Modern demand-driven points-to or alias analysis techniques rest on the foundation of Context-Free Language (CFL) reachability. These techniques achieve high precision efficiently for a small number of queries raised in small programs, but may still be too slow when answering many queries for large programs in a context-sensitive manner. We present an approach, called DynSum, that performs context-sensitive demand-driven points-to analysis fully on-demand by computing CFL-reachability summaries, without any precision loss. The novelty lies in initially performing a Partial Points-To Analysis (PPTA) within a method, which is field-sensitive but context-independent, to summarize the local points-to relations encountered during a query, and then reusing this information later in the same or different calling contexts. We have compared DynSum with RefinePTS, a refinement-based analysis, using three clients (safe casting, null dereferencing and factory methods) on a suite of nine Java programs; DynSum's average speedups are 1.95x, 2.28x and 1.37x, respectively. We have also compared DynSum with a static approach, referred to here as StaSum, to show DynSum's improved scalability for the same three clients.

Modular heap analysis techniques analyze a program by computing, for every procedure, a summary that describes its effects on an input heap, using precomputed summaries for the called procedures. In this article, we focus on a family of modular heap analyses that summarize a procedure's heap effects using a context-independent, shape-graph-like summary that is agnostic to the aliasing in the input heap.
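The summary-reuse idea behind DynSum's PPTA can be sketched in miniature as follows. The tiny method table, the "copy"/"alloc" encoding, and all names below are invented for illustration; field and context handling, and cycles in the value flow, are deliberately omitted:

```python
# A minimal sketch of demand-driven points-to resolution with summary
# reuse. Each method is summarized by its local copy edges (x = y) and
# allocation sites (x = new T); the encoding is hypothetical.
methods = {
    "id":   {"copies": [("ret", "p")], "allocs": []},
    "main": {"copies": [("a", "o1_ref")], "allocs": [("o1_ref", "o1")]},
}

summary_cache = {}  # (method, var) -> frozenset of abstract objects

def points_to(method, var):
    """Resolve the points-to set of `var` in `method` on demand,
    reusing per-method local results computed by earlier queries."""
    key = (method, var)
    if key in summary_cache:          # reuse across queries/contexts
        return summary_cache[key]
    m = methods[method]
    result = {obj for (lhs, obj) in m["allocs"] if lhs == var}
    for (lhs, rhs) in m["copies"]:    # follow local copy edges
        if lhs == var:
            result |= points_to(method, rhs)
    summary_cache[key] = frozenset(result)
    return summary_cache[key]
```

A second query for `("main", "a")`, or a query reaching the same method from a different calling context, hits the cache instead of re-traversing the method body, which is the source of the speedups reported above.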


As multicore systems become the dominant mainstream computing technology, one of the most difficult challenges the industry faces is the software. Applications with large amounts of explicit thread-level parallelism naturally scale performance with the number of cores, but single-threaded applications realize little to no gains from additional cores. One solution to this problem is automatic parallelization, which frees the programmer from the difficult task of parallel programming and offers hope for handling the vast amount of legacy single-threaded software. There is a long history of automatic parallelization for scientific applications, but the techniques have generally failed in the context of general-purpose software. Thread-level speculation overcomes the problem of memory dependence analysis by speculating on unlikely dependences that would otherwise serialize execution. However, this approach has led to only modest performance gains. In this paper, we take another look at exploiting loop-level parallelism in single-threaded applications. We show that substantial amounts of loop-level parallelism are available in general-purpose applications, but that this parallelism lurks beneath the surface and is often obfuscated by a small number of data and control dependences. We adapt and extend several code transformations from the instruction-level and scientific parallelization communities to uncover the hidden parallelism. Our results show that 61% of the dynamic execution of the studied benchmarks can be parallelized with our techniques, compared to 27% using traditional thread-level speculation techniques, resulting in a speedup of 1.84 on a four-core system compared to 1.41 without the transformations.

Static analyses can typically be accelerated by reducing redundancies. This paper addresses the scalability and accuracy of summary-based context-sensitive pointer analysis formulated as a two-phase computation. The first phase, or bottom-up phase, propagates procedure summaries from callees to callers. Then, the second phase, or top-down phase, computes the actual pointer information. These two phases can be independently context-sensitive. To address the problems that procedural side effects cause, we developed a bottom-up phase that constructs concise procedure summaries in a manner that permits their subsequent removal. This transformation results in an efficient two-phase pointer analysis in the style of Andersen that is simultaneously bottom-up and top-down context-sensitive, and it keeps the cost of the bottom-up phase close to that inherent to even a context-insensitive analysis, allowing for an accurate and efficient top-down phase. The implemented context-sensitive analysis exhibits scalability comparable to that of its context-insensitive counterpart. On the largest C benchmark in SPEC 2000, our analysis takes 190 seconds, as opposed to 44 seconds for the context-insensitive analysis. Given the common practice of treating recursive subgraphs context-insensitively, its accuracy is equivalent to that of an analysis which completely inlines all procedure calls.
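The bottom-up/top-down split can be illustrated with a toy example. The procedure encoding below (a summary records which formal parameter a procedure's return value flows from) and every name in it are invented assumptions, far simpler than a real Andersen-style analysis:

```python
# Toy two-phase analysis over an acyclic call graph. Bodies are either
# ("ret_param", i): "return the i-th argument", or
# ("ret_call", callee, [arg indices]): "return callee(selected args)".
procs = {
    "identity": ("ret_param", 0),
    "wrap":     ("ret_call", "identity", [1]),  # returns identity(arg 1)
}
call_order = ["identity", "wrap"]  # callees summarized before callers

def bottom_up():
    """Phase 1: build concise per-procedure summaries, callees first."""
    summaries = {}
    for p in call_order:
        body = procs[p]
        if body[0] == "ret_param":
            summaries[p] = body[1]
        else:
            # Compose the callee's summary with this call's arguments;
            # the callee's body never needs to be revisited, which is
            # what makes subsequent summary removal possible.
            _, callee, args = body
            summaries[p] = args[summaries[callee]]
    return summaries

def top_down(summaries, proc, actual_pts):
    """Phase 2: apply a summary at a concrete call site, given the
    points-to sets of the actual arguments."""
    return actual_pts[summaries[proc]]
```

For example, `top_down(bottom_up(), "wrap", [pts_a, pts_b])` returns `pts_b`, because `wrap`'s return value flows from its second argument through `identity`.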

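One classic transformation of the kind adapted in the loop-parallelism work above is accumulator expansion: a reduction variable creates a cross-iteration dependence that serializes an otherwise parallel loop, and giving each worker a private partial accumulator removes it. The sketch below is an invented Python illustration of the idea (Python threads will not actually speed up this CPU-bound loop), not code from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # Stand-in for a loop body with no real cross-iteration dependence.
    return x * x

def serial_sum(data):
    total = 0
    for x in data:
        total += work(x)        # `total` carries a loop dependence
    return total

def parallel_sum(data, workers=4):
    # Transformation: each worker reduces a private chunk into its own
    # partial sum; the dependence on the single accumulator disappears,
    # and the partials are combined afterwards.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(workers) as ex:
        return sum(ex.map(serial_sum, chunks))
```

Both versions compute the same result; the transformed loop exposes the parallelism that the single accumulator had obfuscated.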