Cetus Compiler Installation Instructions

Posted By admin On 26.10.19

The compiler that we recommend is the GNU Compiler Collection, or GCC. This is a widely used, cross-platform compiler tool suite with libraries and compilers for C, C++, Fortran, Java, and more. Additionally, the compiler that we will use later in the course for compiling C code to run on the PIC32 is based on GCC.

Chapter 1. Introduction

Parallelizing compiler technology is most mature for the Fortran 77 language. The simplicity of the language, without pointers or user-defined types, makes it easy to analyze and to develop many advanced compiler passes.

By contrast, parallelization technology for modern languages, such as Java, C++, or even C, is still in its infancy. When trying to engage in such research, we were faced with a serious challenge: we were unable to find a parallelizing compiler infrastructure that supports interprocedural analysis, exhibits state-of-the-art software engineering techniques to achieve shorter development time, and allows us to compile large, realistic applications. However, we feel these properties are of paramount importance.

They enable a compiler writer to develop 'production strength' passes. Production strength passes, in turn, can work in the context of the most up-to-date compiler technology and lead to compiler research that can be evaluated with full suites of realistic applications. Many people have observed and criticized the lack of such thorough evaluations in current research papers. The availability of an easy-to-use compiler infrastructure would help improve this situation significantly. Hence, continuous research and development in this area are among the most important tasks of the compiler community. Cetus contributes to this effort.

GCC

GCC is one of the most robust compiler infrastructures available to the research community. GCC generates highly optimized code for a variety of architectures, which in many cases rivals the code generated by the machine vendor's compiler. Its open-source distribution and continuous updates make it attractive. However, GCC was not designed for source-to-source transformations.


Most of its passes operate on the lower-level RTL representation. Only recent versions of GCC (version 3.0 onward) include an actual syntax tree representation, which has been used in Purdue class projects for implementing a number of compiler passes. Other limitations are that GCC compiles one source file at a time, performs separate analysis of procedures, and requires extensive modification to support interprocedural analysis across multiple files. The most difficult problem faced by the students was that GCC does not provide a friendly API for pass writers; the API consists largely of macros.

Passes need to be written in C, and operations lack logical grouping (classes, namespaces, etc.), as would be expected from a compiler developed in an object-oriented language. GCC's IR has an ad-hoc type system, which is not reflected in its implementation language (C). The type system is encoded into integers that must be decoded and manipulated by applying a series of macros. It is difficult to determine the purpose of fields in the IR from looking at the source code, since in general every field is represented by the same type. This also makes it difficult for debuggers to provide meaningful information to the user.

Documentation for GCC is abundant; the difficulty is that the sheer amount easily overwhelms the user. Generally, we have found that there is a very steep learning curve in modifying GCC, with a big time investment required to implement even trivial transformations. These difficulties were considered primarily responsible for the slower progress of the students who used GCC compared with those who created a new compiler design. The demonstrated higher efficiency of implementation was the ultimate reason for the decision to pursue the full design of Cetus.

Polaris

The Polaris compiler, which we co-developed in prior work, was an important influence on the design of our new infrastructure. Polaris is written in C++ and operates on Fortran 77 programs.



So far, no extensions have been made to handle Fortran 90, which provides a user-defined type system and other modern programming language features. Polaris' IR is Fortran-oriented, and extending it to other languages would require substantial modification. Another factor we considered was that Polaris was written before the Standard Template Library (C++ STL) became available, so it includes its own container implementations. It uses a pre-ISO dialect of C++ that now seems unusual to programmers and causes many warnings (and sometimes errors) with current compilers. Both aspects limit its portability to other platforms. In general, Polaris is representative of compilers that are designed for one particular language, serve their purpose well, but are difficult to extend.

Cetus should not be thought of as 'Polaris for C' because it is designed to avoid that problem. However, there are still several Polaris features that we wanted to adopt in Cetus. Polaris' IR can be printed in the form of code that is similar to the source program. This property makes it easy for a user to review and understand the steps involved in Polaris-generated transformations. Also, Polaris' API is such that the IR is in a consistent state after each call. Common mistakes that pass writers make can be avoided in this way.

SUIF

The SUIF compiler is part of the National Compiler Infrastructure (NCI), along with Zephyr, whose design began almost a decade ago. The infrastructure was intended as a general compiler framework for multiple languages. It is written in C++, like Polaris, and the currently available version supports analysis of C programs. SUIF 1 is a parallelizing compiler and SUIF 2 performs interprocedural analysis. Both SUIF and Cetus fall into the category of extensible source-to-source compilers, so at first SUIF looked like the natural choice for our infrastructure. Three main reasons ended our pursuit of this option.

The first was the perception that the project is no longer active: the last major release was in 2001 and does not appear to have been updated recently. The second reason was that, although SUIF intends to support multiple languages, we could not find complete front ends other than for C and an old version of Java. Work began on front ends for Fortran and C++, but they are not available in the current release. Hence, as it stands, SUIF essentially supports a single language, C.

Finally, we had a strong preference for using Java as the compiler implementation language. Java offers several features conducive to good software engineering. It provides good debugging support, high portability, garbage collection (contributing to the ease of writing passes), and its own automatic documentation system. These facts prompted us to pursue other compiler infrastructures.

Chapter 3. License

In recent years, there has been an explosion in the number of open-source licenses, and we do not wish to contribute further to that problem, but this was the only license upon which all involved parties were able to agree. We suggest that you do not use this license for projects that do not involve Cetus. The Cetus License is a modified Artistic License, similar to the Perl Artistic License.

Artistic Licenses are generally OSI-approved, although this specific license has not been submitted to OSI. You can view the license on the download page.

Data Dependence Analysis

Data dependence analysis is a memory disambiguation technique through which a compiler tries to establish dependence relations between scalar variables or between array accesses in a program. The existence of dependences, or the proof of independence, provides essential information for further analysis and for legal transformations such as loop parallelization, loop tiling, loop interchange, and so on. Cetus implements an array data dependence testing framework that includes loop-based affine array-subscript testing. The focus of this implementation is automatic loop parallelization. The framework acts as a wrapper around a conventional data-dependence test; we currently use the Banerjee-Wolfe inequalities (Optimizing Supercompilers for Supercomputers, Michael Wolfe) as our default dependence test.

The wrapper framework gives Cetus the flexibility to include other dependence tests, thereby improving the scope of our implementation (the Range test is provided since v1.1). A whole-program data-dependence graph provides the necessary output information and the interface for subsequent analyses and transformations.

Initially, the algorithm traverses all loops in the program IR, and each loop and its inner nest are tested for eligibility. The eligibility check allows us to broaden or narrow the scope of the implementation. Currently, the algorithm includes the following checks.

Canonical loops (Fortran DO-loop format): this checks the canonical form of the loop's initial assignment expression, the loop condition, and the loop index increment statement. Function calls are supported with simple analysis.

Known function calls without side effects can be added to a list of parallelizable calls (e.g., system calls). We also support simple interprocedural side-effect analysis. Control-flow modifiers are not allowed (break statements can be handled specially for parallelization).

The loop increment must be an integer constant (symbolic increments are handled using range analysis). Loop information for all eligible loops is collected and stored internally for further dependence analysis.

SYMBOLIC ANALYSIS SUPPORT: The data dependence framework includes substantial support for using range analysis in the case of symbolic values. Range information is used to determine loop bounds and loop increments during eligibility testing. In the case of symbolic bounds, range information is used conservatively to test the maximum possible range of values. Symbolic information is also used to simplify array subscripts before they are provided to the Banerjee-Wolfe inequalities for testing. This symbolic support thus adds great value to dependence analysis, especially to the Banerjee test.

The algorithm then tests all array accesses within an entire loop nest for independence. Arrays identified by AliasAnalysis as aliased are currently assumed to be dependent in all directions for the enclosing loops. Pairs of array accesses, at least one of which is a write, are provided as input to the dependence test, which returns a set of direction vectors corresponding to the common enclosing loop nest for the pair of accesses (if independence could not be proved).

These direction vectors are then used to build a Data Dependence Graph (DDG) for the loop nest. The data dependence graph for each loop nest becomes part of a larger, whole-program graph that is attached to the Program IR object upon completion of dependence testing. This implementation is new and differs substantially from release v1.0.
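To make the workflow concrete, the following Java sketch shows how a pass might fetch the whole-program graph from the Program object and ask whether a given loop carries a dependence. The accessor getDDGraph() and the query checkLoopCarriedDependence() are assumptions made for this sketch, based on the description in this section; the authoritative names and signatures are in the Cetus javadoc for DDGraph.

    import cetus.analysis.DDGraph;
    import cetus.hir.Loop;
    import cetus.hir.Program;

    public class DDGExample {
        public static void report(Program program, Loop loop) {
            // Assumed accessor: the graph is attached to the Program object
            // once dependence testing has completed.
            DDGraph ddg = program.getDDGraph();
            if (ddg == null) {
                return; // dependence testing has not been run yet
            }
            // Assumed query: is there a loop-carried dependence for this loop?
            boolean carried = ddg.checkLoopCarriedDependence(loop);
            System.out.println("Loop carries a dependence: " + carried);
        }
    }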

A pass that wants to use dependence information in an analysis or transformation must obtain a reference to this DDG from the Program object. A comprehensive API is then available via DDGraph (see the Cetus javadoc) to extract dependence information related to loops. This implementation has the advantage that dependence testing is run fewer times and a common API is available to pass writers. NOTE: It is the pass writer's responsibility to re-run dependence testing after a transformation on the IR has been performed, in order to create an up-to-date dependence graph, which automatically replaces itself in the Program IR.

Alias Analysis

Alias analysis is a technique used to identify sets of program variable names that may refer to the same memory location during program execution. In C programs, aliases are created through the use of pointers as well as reference parameter passing. Disambiguation of variable names can be crucial to the accuracy and effectiveness of other analyses and transformations performed by the compiler.


A compile-time alias analysis can be flow-sensitive and/or context-sensitive, or neither. Cetus provides advanced flow-sensitivity and limited context-sensitivity when generating alias information. Advanced alias information is now provided in Cetus using the result of interprocedural points-to analysis.

This information is flow-sensitive and hence uses statement-level information. The result of the points-to analyzer is a map from a Statement to the points-to information that exists right before the statement is executed. Pointer relationships are used by the Alias Analysis pass to create alias sets, which are obtained through a unification-based approach.

Alias information can be accessed by pass writers through a simple API. The presence of aliasing between two symbols can be checked using functions that return a boolean value. Refer to the HIR documentation for more information on symbols.
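As a rough illustration of such a boolean query, the sketch below asks whether two symbols may alias immediately before a given statement. The method name isAliased() and its parameter list are assumptions made for this example; consult the AliasAnalysis javadoc for the real entry points.

    import cetus.analysis.AliasAnalysis;
    import cetus.hir.Statement;
    import cetus.hir.Symbol;

    public class AliasExample {
        public static void report(AliasAnalysis aliasPass, Statement stmt,
                                  Symbol a, Symbol b) {
            // Assumed flow-sensitive query: may 'a' and 'b' refer to the same
            // memory location right before 'stmt' executes?
            boolean mayAlias = aliasPass.isAliased(stmt, a, b);
            System.out.println(a.getSymbolName() + " and " + b.getSymbolName()
                    + (mayAlias ? " may alias here" : " do not alias here"));
        }
    }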

Separate Parsers

Parsing was intentionally separated from the IR-building methods in the high-level interface so that other front ends could be added independently. Some front ends may require more effort than others. For example, writing a parser for C++ is a challenge because its grammar does not fit easily into any of the grammar classes supported by standard parser generators. The GNU C++ compiler was able to use an LALR(1) grammar, but it looks nothing like the ISO C++ grammar. If any rules must be rearranged to add actions in a particular location, it must be done with extreme care to avoid breaking the grammar. Another problem is that C++ has much more complicated rules than C for determining which symbols are identifiers versus type names, requiring substantial symbol table maintenance while parsing.

Handling Pragmas and Comments

Pragmas and comments are identified during scanning as 'Annotation'-type IR. These are inserted by the parser into the IR as PreAnnotations. Comments are inserted where they appear in the program, except when they appear in the middle of another IR construct, such as an AssignmentStatement; in that case, they appear in the output before the corresponding statement. Comments at the end of a line of code appear AFTER that line in the output. Since v1.1, Cetus adopts a new Annotation implementation in order to simplify the IR associated with different types of annotations as we begin to accommodate more types.


Once PreAnnotations are parsed in and stored in the IR, the AnnotationParser converts them into specific IR as described in the Annotations section later in this manual. Because the semantics of PragmaAnnotations are known, they can be associated with specific statements; Cetus does this automatically, which allows the annotations to move with the corresponding IR during transformations.

However, in the case of CommentAnnotations and other possible new annotations interpreted as comments, Cetus can only store them as stand-alone annotations, which prevents their movement with the corresponding IR. More details about the exact Annotation implementation are found in the IR section.

Intermediate Representation

Cetus' IR contrasts with the Polaris Fortran translator's IR in that it uses a hierarchical statement structure. The Cetus IR directly reflects the block structure of a program. Polaris lists the statements of each procedure in a flat way, with a reference to the outer statement being the only way of determining the block structure. There are also important differences in the representation of expressions, which further reflect the differences between C and Fortran.

The Polaris IR includes assignment statements, whereas Cetus represents assignments in the form of expressions. This corresponds to the C language feature of allowing assignment side effects within any expression. The IR is structured such that the original source program can be reproduced, but this is where source-to-source translators face an intrinsic dilemma. Keeping the IR and output similar to the input makes it easy for the user to recognize the transformations applied by the compiler. On the other hand, keeping the IR language-independent leads to a simpler compiler architecture, but may make it impossible to reproduce the original source code as output. In Cetus, the concepts of statements and expressions are closely related to the syntax of the C language, facilitating easy source-to-source translation.

However, the drawback is increased complexity for pass writers (since they must think in terms of C syntax) and limited extensibility of Cetus to additional languages. That problem is mitigated by the provision of several abstract classes that represent generic control constructs. Generic passes can then be written using the abstract interface, while more language-specific passes can use the derived classes. We feel it is important to work with multiple languages at an early stage, so that our result is a design that is extensible not only in theory but also in practice.
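The sketch below illustrates this split: a language-neutral pass can count every loop through the abstract Loop interface and only drop down to the C-specific ForLoop class when it needs C-level detail. DepthFirstIterator, Loop, ForLoop, and Traversable follow the names used for the Cetus IR; treat the exact iteration API as an assumption to be checked against the javadoc.

    import cetus.hir.DepthFirstIterator;
    import cetus.hir.ForLoop;
    import cetus.hir.Loop;
    import cetus.hir.Traversable;

    public class LoopCount {
        public static int countLoops(Traversable root) {
            int total = 0;
            DepthFirstIterator iter = new DepthFirstIterator(root);
            while (iter.hasNext()) {
                Object node = iter.next();
                if (node instanceof Loop) {        // generic: any loop construct
                    total++;
                    if (node instanceof ForLoop) { // specific: C for-loop details
                        // e.g., inspect the C-level init/condition/step here
                    }
                }
            }
            return total;
        }
    }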

Toward this goal, we have begun adding a C++ front end and generalizing the IR so that we can evaluate these design trade-offs. Other potential front ends are Java and Fortran 90.

Class Hierarchy Design

Our design goal was a simple IR class hierarchy that is easily understood by users. It should also be easy to maintain, while being rich enough to enable future extension without major modification. The basic building blocks of a program are translation units, which represent the content of a source file, and procedures, which represent individual functions.

Procedures include a list of simple or compound statements, representing the program control flow in a hierarchical way. That is, compound statements, such as IF-constructs and FOR-loops, include inner (simple or compound) statements, representing then and else blocks or loop bodies, respectively.

Expressions represent the operations being done on variables, including the assignments to variables.

Relationship Between Grammar and IR

When designing any class hierarchy, some general principles are followed.

The main principle is that a derived class satisfies an 'is-a' relationship with its parent, such that the data and methods of the parent make sense in the context of the child. This principle is followed to some extent in Cetus, but it would be more accurate to say that a derived class can appear in the grammar wherever its parent can appear. There is a distinction between the class hierarchy and the syntax tree. For example, in the syntax tree, the parent of a TranslationUnit is a Program; however, neither TranslationUnit nor Program has a parent in the class hierarchy.
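A small sketch, assuming Traversable-style getChildren() accessors, shows the syntax-tree relationship in code: a Program's children are TranslationUnits, whose children include Procedures, even though these classes are not related through a common parent in the class hierarchy.

    import cetus.hir.Procedure;
    import cetus.hir.Program;
    import cetus.hir.TranslationUnit;

    public class TreeWalk {
        public static void listProcedures(Program program) {
            for (Object child : program.getChildren()) {        // TranslationUnits
                if (child instanceof TranslationUnit) {
                    for (Object decl : ((TranslationUnit) child).getChildren()) {
                        if (decl instanceof Procedure) {        // file-scope functions
                            System.out.println("procedure: "
                                    + ((Procedure) decl).getName());
                        }
                    }
                }
            }
        }
    }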

Syntax Tree Invariants

One important aspect that makes an infrastructure useful is providing a good set of tools to help debug future compiler passes. Cetus provides basic debugging support through the Java language, which contains exceptions and assertions as built-in features.

Cetus executes within a Java virtual machine, so a full stack trace, including source line numbers, is available whenever an exception is caught or the compiler terminates abnormally. Such error reporting is useless unless the IR is designed to prevent programmers from corrupting the program representation. In other words, there must be a way of detecting that the state of the IR is not as it should be, in order to report an error. To provide error checking and detect common errors by pass writers, the Cetus IR maintains several invariants. Violating one of the invariants below will probably result in an exception, to the extent that it is possible for Cetus to recognize what you have done.

The Annotation subclasses are CetusAnnotation (#pragma cetus ...), OmpAnnotation (#pragma omp ...), CommentAnnotation (e.g. /* ... */ comments), and CodeAnnotation (raw printing). Annotations can be attached to existing IR where the semantics call for it, or can be inserted as stand-alone IR.

Of the above-mentioned subclasses, CetusAnnotations and OmpAnnotations have specific semantics, and hence, if present in the input source, they are attached to the corresponding source IR. However, this is not possible for comments, as it is difficult to determine their association with the IR except in certain cases (which are not handled currently).

Hence, all comments and other such annotations are inserted as stand-alone. Annotations that need to be attached to existing IR are stored as member objects of the Annotatable IR (Statements and Declarations); the new API for manipulating them is therefore available through Annotatable. Stand-alone annotations are enclosed in special IR such as AnnotationDeclaration or AnnotationStatement (note that AnnotationStatement differs from the previous release).
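A minimal sketch of this usage, assuming the annotate()/getAnnotation() method names and the key/value convention shown below, attaches a Cetus pragma to a statement and later queries it back through Annotatable; the tutorial and javadoc give the definitive API.

    import cetus.hir.CetusAnnotation;
    import cetus.hir.Statement;

    public class AnnotationExample {
        public static void markParallel(Statement stmt) {
            // CetusAnnotation acts as a key/value map ("#pragma cetus parallel").
            CetusAnnotation note = new CetusAnnotation();
            note.put("parallel", "true");   // assumed key/value convention
            stmt.annotate(note);            // assumed Annotatable method
        }

        public static boolean isMarkedParallel(Statement stmt) {
            // Assumed lookup of an attached CetusAnnotation by key.
            return stmt.getAnnotation(CetusAnnotation.class, "parallel") != null;
        }
    }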

The API in Annotatable COMPLETELY REPLACES the previous functionality provided through Tools.java. See the latest Cetus tutorial for examples of how to use the new Annotation and Annotatable API during analysis and transformations.

Expression Simplifier

The expression simplifier supports normalization and simplification of expressions. It internally transforms a given expression into a normalized form, so that each operand of a commutative and associative operation is located at the same height in the tree representation; in other words, it converts a binary tree into an n-ary tree. This internal representation makes the simplification algorithm easier, and the result of the simplification is converted back to the original internal representation of Cetus. Like its predecessor Polaris, a key feature of Cetus is the ability to reason about the represented program in symbolic terms.

For example, compiler analyses and optimizations at the source level often require the expressions in the program to be in a simplified form. A specific example is data dependence analysis, which collects the coefficients of affine subscript expressions that are passed to the underlying data dependence test package. Cetus provides functionality that eases the manipulation of expressions for pass writers. The following example shows how to invoke the simplifier; the simplifier returns a new copy of the original expression, converted to a normalized form.


    import cetus.hir.*;

    Expression e = ...;
    e = Symbolic.simplify(e);

It is also possible for users to invoke individual simplification techniques for their own purposes. The following examples show the functionality of the individual techniques; see the javadoc page or the source code to learn how to invoke each technique individually.

    1+2*a+4-a   ->  5+a        (folding)
    a*(b+c)     ->  a*b+a*c    (distribution)
    (a*2)/(8*c) ->  a/(4*c)    (division)

Printing

A major complaint about early versions of Cetus was that the printing of the IR was not flexible enough. To solve this problem, we have made printing completely customizable by the pass writer.

Nearly all classes implement a Printable interface, which means they provide a print method for printing themselves as source code. By default, this print method calls a static method of the same class, called defaultPrint. The call is made by invoking a Java Method object, which is similar to a function pointer in C. The Method object can be changed by calling setPrintMethod and passing it a different printing routine. Calling setPrintMethod thus allows a pass writer to change how one particular object prints itself.

If the pass writer notices that they are calling this method frequently for the same type of object, then it may be easier to call setClassPrintMethod, a static method which causes all newly created objects of a particular class to use the provided printing routine.
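The following sketch shows the intended pattern, under the assumption that a print routine takes the object being printed plus a PrintWriter; defaultPrint, setPrintMethod, and setClassPrintMethod are the names described above, but their exact signatures should be checked against the javadoc.

    import java.io.PrintWriter;
    import java.lang.reflect.Method;
    import cetus.hir.ForLoop;

    public class PrintingExample {
        // Hypothetical custom routine: emit a marker, then fall back to the
        // class's default printer.
        public static void printWithMarker(ForLoop loop, PrintWriter out) {
            out.print("/* transformed loop */ ");
            ForLoop.defaultPrint(loop, out);   // assumed defaultPrint signature
        }

        public static void install(ForLoop oneLoop) throws NoSuchMethodException {
            Method m = PrintingExample.class.getMethod(
                    "printWithMarker", ForLoop.class, PrintWriter.class);
            oneLoop.setPrintMethod(m);         // change printing for this one object
            // ForLoop.setClassPrintMethod(m); // or for all ForLoops created later
        }
    }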
