C++ compilation


From: "Ross A. Finlayson" <raf@tiki-lounge.com.invalid>
Newsgroups: comp.compilers
Date: Mon, 13 Oct 2008 17:43:23 -0700
Organization: Aioe.org NNTP Server
Keywords: C++, question
Posted-Date: 14 Oct 2008 06:32:47 EDT


I am working on some program tools and have some ideas for C++ compilation.

I plan to use memoizing packrat parsers, with the objects providing their
own implementations of the transcoding, using the path-transfer alignments
in code-fragment parsing, with context parsing along unsatisfied symbols,
and with the objects exporting their own serialization.
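As a hedged sketch of the memoizing-packrat idea mentioned above (not the poster's code; the toy grammar, rule numbering, and names are my own invention), the core is a memo table keyed by (rule, input position), so each position is parsed by each rule at most once:

```cpp
#include <cassert>
#include <cctype>
#include <map>
#include <string>
#include <utility>

// Memo table: (rule id, position) -> end position of the match,
// or -1 for a remembered failure. Memoizing every (rule, position)
// pair is what makes a packrat parser linear-time over a PEG.
struct Packrat {
    std::string input;
    std::map<std::pair<int, size_t>, long> memo;

    // Rule 1: digit <- [0-9]
    long parse_digit(size_t pos) {
        std::pair<int, size_t> key(1, pos);
        std::map<std::pair<int, size_t>, long>::iterator it = memo.find(key);
        if (it != memo.end()) return it->second;      // memo hit: reuse result
        long result = (pos < input.size() && isdigit((unsigned char)input[pos]))
                          ? (long)(pos + 1) : -1;
        memo[key] = result;
        return result;
    }

    // Rule 0: sum <- digit ('+' digit)*
    long parse_sum(size_t pos) {
        std::pair<int, size_t> key(0, pos);
        std::map<std::pair<int, size_t>, long>::iterator it = memo.find(key);
        if (it != memo.end()) return it->second;
        long p = parse_digit(pos);
        while (p != -1 && (size_t)p < input.size() && input[p] == '+') {
            long q = parse_digit((size_t)p + 1);
            if (q == -1) break;                       // no digit after '+': stop
            p = q;
        }
        memo[key] = p;
        return p;
    }
};
```

For example, parsing "1+2+3" from position 0 consumes the whole string (end position 5), while a failure such as a leading '+' is memoized as -1.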

To that end I implemented a type model of the C++ language, with files for
each of the classes of keywords of the language and so on.

Then, I can parse streams in this manner, reading in declarations and
definitions and so on from the C++ during compilation.

Then the plan is basically to use the semantic tree in memory, which
carries with it a lot of memoizing in the parsers; later passes read the
semantic tree serialized to memory, where there is tree alignment,
copying, and sharing of iterators and so on.

I define the blocks of definitions in terms of structure, with the
grouping operators of the language providing the block separation.
However, I've been working more on this than on the runtime type system
for the compiler. I have been defining the block composition structurally
for code-fragment parsing instead of top-down parsing, where, in the
maintenance of the library symbols, there is much to be satisfied beyond
program correctness.

Then, I have the general notion of using completely different calling
conventions from those of the market compilers, because I plan to use
those and other conventions with the data, and to generate self-modifying
code that emits its compilation tree as a source file.

That is, in a compiler for a language like C++, there needs to be
interpretation of the object model, with the conventions of C++ objects in
their references, with the parameters, and so on. There are conventions
for memory alignment and object placement for the functions on the stack
in local call addressing.

Then, for compiler facilities, there are situations like examining the
library environment and so on, as well as gathering reference program data
on retesting, with reversibility in the general instrumentation of the C++
language constructs.

The idea there is to retarget code to program blocks in non-C++
runtimes, as well as use C++ runtimes.

Then, for something like the function definition in C++, there would be
these other attributes and so on, where the general facilities of eventual
language expansion should be built into the compiler. With the objects
having their own implementations of parsers, there is a mapping of the
parse trees to the gathered trees; or, using the gathered trees, their
description as parts of the function type system is to be reducible to
primitives, where it is good to have small codes along the features of the
compiler in the code analysis and translation.

Here my idea is that the parser should be in layers: first the compilation
blocks are loaded into memory, then the source code is parsed.

It is perhaps simpler than might be thought: look for markers of source
symbols in the code, and pass those ranges to the parsers on the next
level; then there is composition of the resulting trees with the tree
organization in the code event.
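The first layer of that marker scan might look like the following hedged sketch (my own illustration, not the poster's code): scan for brace markers, collect the top-level ranges, and hand each range to a next-level parser.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// First layer: find '{' ... '}' markers and collect the top-level
// brace-delimited ranges as (begin, end) offsets into the source.
// A second-layer parser would then be invoked on each range.
std::vector<std::pair<size_t, size_t> > top_level_blocks(const std::string& src) {
    std::vector<std::pair<size_t, size_t> > ranges;
    int depth = 0;       // current brace-nesting depth
    size_t start = 0;    // offset of the '{' that opened the current block
    for (size_t i = 0; i < src.size(); ++i) {
        if (src[i] == '{') {
            if (depth++ == 0) start = i;              // entering a top-level block
        } else if (src[i] == '}') {
            if (--depth == 0)                         // leaving a top-level block
                ranges.push_back(std::make_pair(start, i + 1));
        }
    }
    return ranges;
}
```

On "void f(){int x;} struct S{};" this yields two ranges, the first covering "{int x;}"; a real scanner would also have to skip string literals and comments, which this sketch ignores.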

As I am trying to write compiler systems, I wonder about implementing
a C++ parsing strategy that I have.

04/18/2008 12:14 AM 1,690 containers.h
04/18/2008 12:27 AM 273 data.h
04/16/2008 11:10 PM 135 emitter.h
04/20/2008 02:15 PM 352 mark.h
07/23/2003 06:52 PM 0 mark_finder.h
04/19/2008 01:14 AM 372 parser.h
04/18/2008 12:11 AM 439 pointermap.h
04/18/2008 12:14 AM 467 pointers.h
09/28/2008 10:03 PM 232 recognizer.h
04/18/2008 12:18 AM 382 sample.h
04/16/2008 11:09 PM 267 sink.h
04/16/2008 11:08 PM 277 source.h
04/16/2008 11:14 PM 103 symbol.h
04/16/2008 11:13 PM 196 symbol_class.h
                              14 File(s) 5,185 bytes
                                0 Dir(s) 55,327,002,624 bytes free

The pointers.h file is as follows:

#ifndef h_pointers_h
#define h_pointers_h h_pointers_h

#include <cstddef>

template <typename T> class cptr{ // class instance pointer, deleted by
public:
          T* ptr;
          operator T(){return *ptr;}
          T operator =(const T & rhs){ *ptr = rhs; return *ptr;}
};

template <typename T> class xptr{ // shared reference counting
public:
          typedef size_t refcount_t;
          T* ptr;
          refcount_t m_refcount;
          operator T(){return *ptr;}
          T operator =(const T & rhs){ *ptr = rhs; return *ptr;}
};

#endif /* h_pointers_h */
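To make the owning-pointer idea behind cptr concrete, here is a self-contained hedged sketch (my own names and details, not the poster's code): a pointer that deletes its target on destruction and converts implicitly to the pointee type.

```cpp
#include <cassert>

// Sketch of a single-owner "class instance pointer": deletes its
// target when destroyed, reads and writes through conversions.
template <typename T>
class owning_ptr {
    T* ptr;
    owning_ptr(const owning_ptr&);            // not copyable: a copy would double-delete
    owning_ptr& operator=(const owning_ptr&);
public:
    explicit owning_ptr(T* p) : ptr(p) {}
    ~owning_ptr() { delete ptr; }             // single owner frees the object
    operator T() const { return *ptr; }       // implicit read of the pointee
    T operator=(const T& rhs) { *ptr = rhs; return *ptr; }
};
```

Copying is deliberately disabled; sharing across owners is what the reference-counting variant (cf. xptr above) would add.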

This code is parsing over token groups that the objects provide.

Then, for the C++ type model, the headers are:

basespecifier.h clearable.h constness.h containers.h
declaration.h definition.h enum_base.h identifier.h
initializer.h inlinity.h istreamable.h klass.h
member.h membermutable.h ostreamable.h parameter.h
parsable.h pointer.h pointermap.h pointers.h
preprocessor.h recognizable.h referencing.h restricted.h
source.h storageclass.h streamable.h stringable.h
symbolclass.h type.h typequalifier.h types.h
virtuality.h visibility.h volatility.h

Everything is spelled much the same as its meaning in the specification,
except the klass prototype for the class prototype, since "class" is a
reserved word in the language.
    There are container types among the composition types, as so:

#ifndef h_pointermap_h
#define h_pointermap_h h_pointermap_h

#include <map>
#include "pointers.h"

template <typename object_ptr_type> class pointermap{

          std::map<object_ptr_type, xptr<object_ptr_type> > m_map;

public:
          bool contains(const object_ptr_type& key);
          xptr<object_ptr_type>* get(const object_ptr_type& key);
          void put(object_ptr_type* key, xptr<object_ptr_type> value);
};

#endif /* h_pointermap_h */

So, now I wonder how, in code translation systems, to implement the
short-code-range opportunistic parsers.

I'm working on a clipboard tool for which I hope to use the compiler to
break up code, with the alignment of the copied blocks.

Yet, what use is my C++ language compiler? I should see whether it will
compile some test functions, at the very least, before using the
specification input to verify C++ language compiler support across
compiler-facility code alignment.

I should see it running on C++ code; then things would be more manageable.

Let's see: besides generating the semantic tree, there is to be
preservation of all the comments in context, and then reorganization with
massive refactoring and so on after analyzing the load tree, with
source-code search and replace.

Then, instead of tossing the program at the compiler and getting warnings
first, it could be that short code-range blocks are continuously scanned
toward static vector code-path alignment. In that way, the programmer can
guide the selection of the program path through the specification
connects, e.g. in implementing the template algorithms.

Then, in the source code generation systems, the data structures are to
be streamable, in a way that is not a babel of XML.
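One plain-text way to make structures streamable without XML is the standard C++ stream-operator pair; the following is a hedged sketch with an invented node type and format, not the poster's design:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// A small tree-node-like record that writes itself to a std::ostream
// and reads itself back, using a whitespace-separated text format.
struct Symbol {
    std::string name;
    int line;
};

std::ostream& operator<<(std::ostream& os, const Symbol& s) {
    return os << s.name << ' ' << s.line;     // e.g. "klass 42"
}

std::istream& operator>>(std::istream& is, Symbol& s) {
    return is >> s.name >> s.line;            // inverse of operator<<
}
```

A round trip through a std::stringstream then reproduces the original record, which is the streamability property being asked for.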

I look at the C++ specifications, with consideration of the very useful
web pages on the C++ specifications that I have been reading.


Oh, very nice. I implemented my C++ compiler code so that it can parse
this entire document's C++, yet it is not actually implemented. Consider
Larch. I use VC and gcc, not, for example, the Metrowerks assembler nor
the Z80.

I wonder about the code block generation with the banks and bank
addressing and so on.


Ross F.
