Wednesday, May 25, 2005

MDDI

Let's talk about a new Eclipse project: the MDDi (Model Driven Development Integration) project.

The MDDi project is dedicated to building a platform that offers the integration facilities needed to apply a Model Driven Development (MDD) approach.
This project will produce an extensible framework and tools designed to support various modeling languages (UML (Unified Modeling Language), DSLs (Domain-Specific Languages)) and model-driven methodologies.
The MDDi platform will provide the ability to integrate modeling and related development tools.

Model Driven Development (MDD) stresses the use of models in the software development life cycle and advocates automation via both model transformation and code generation techniques.
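The transformation/generation pair can be sketched in a few lines. This is a toy illustration only: the dictionary "model" and the generated Java-like text are invented for the example and do not correspond to any real MDD platform's format.

```python
# Toy model: entity name -> list of attributes (invented format).
model = {
    "Customer": ["name", "email"],
    "Order": ["number", "amount"],
}

def generate_class(name, attributes):
    """Model-to-text transformation: emit Java-like source for one entity."""
    lines = [f"public class {name} {{"]
    for attr in attributes:
        lines.append(f"    private String {attr};")
    lines.append("}")
    return "\n".join(lines)

for name, attrs in model.items():
    print(generate_class(name, attrs))
```

In a real MDD chain the model would live in a repository (an EMF one, for example) and the templates would be far richer, but the principle is the same: the code is derived from the model, not written by hand.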

OMG is promoting a model-driven approach for software development through its Model Driven Architecture (MDA) initiative and its supporting standards, such as UML, MOF, and MOF QVT. MDA is the OMG's specific incarnation of MDD for software development.

MDDi has relationships with other Eclipse projects:

  • Eclipse Modeling Framework (EMF) is the reference framework in the Eclipse environment for defining and generating model repositories based on meta-models.

  • Graphical Modeling Framework (GMF) aims at enhancing the editors generated by EMF to support graphical editing.

  • Generative Model Transformer (GMT) consists of a set of tools dedicated to model-driven software development. For a good explanation on GMT, look at the GMT overview.

  • OMELET provides a framework for integrating models and model transformations
    with support for MDA (Model Driven Architecture), in which model transformations are reusable resources, and libraries of model transformations support increasingly automated transformation from DSLs (Domain-Specific Languages) via PIMs (Platform Independent Models) and PSMs (Platform Specific Models) down to target code.
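The PIM-to-PSM step in that chain can also be sketched as a reusable mapping. Everything here (the model shape, the type mapping) is invented for illustration and does not reflect OMELET's actual transformation format:

```python
# Toy platform-independent model: entity -> {attribute: abstract type}.
pim = {"Customer": {"name": "String", "age": "Integer"}}

# Reusable mapping from abstract PIM types to SQL platform types
# (the mapping itself is a transformation resource, as in the MDA chain).
sql_mapping = {"String": "VARCHAR(255)", "Integer": "INTEGER"}

def pim_to_psm(pim, mapping):
    """Rewrite every attribute type through the platform mapping."""
    return {
        entity: {attr: mapping[t] for attr, t in attrs.items()}
        for entity, attrs in pim.items()
    }

psm = pim_to_psm(pim, sql_mapping)
print(psm)  # {'Customer': {'name': 'VARCHAR(255)', 'age': 'INTEGER'}}
```

The same PIM could be pushed through a different mapping (Java types, XML Schema types, etc.), which is exactly the reuse that the MDA transformation chain promises.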

One main initial contributor to MDDi is Modelware, with a set of ambitious projects around models in software development.

Monday, May 23, 2005

MOF or EMF, GMF and GEF

For the time being, a very interesting proposal exists from the Eclipse consortium around a metamodeling framework.

This project is the GMF (Graphical Modeling Framework) project from Eclipse.


It is a very ambitious project, and GMF could totally change the modeling tools business.

The foundations are EMF for the meta-modeling (a kind of MOF) and GEF for the 2-D graphical drawing facilities.


This project is in competition with existing proprietary solutions, like for example MetaEdit+ from MetaCase.


On the web, we can find some animated discussions between the different approaches:

  • Existing tools with a large set of CASE applications, but based on older meta-technologies (several key actors among existing UML CASE tool vendors, like MetaCase)
  • A new approach for the future based on GMF (IBM Rational, etc.)


Today we can look at the requirements, with an evaluation of existing donations like:

  • Borland (the Together UML CASE tool belongs to Borland)
  • IBM (the Rational Rose CASE tool belongs to IBM)
  • Other contributors with graph editor generators coupling GEF and EMF.


GMF will cover a wide scope, with the delivery of such materials as a UML2 CASE tool, a BPM/BPEL modeling environment, and a CASE tool around business rules.


What is the current GMF timeframe?

MOF and EMF:

In the battle between MOF and EMF, there are some materials to read in order to appreciate and understand the perspectives of each player.

Materials produced by the Pegamento team are very interesting, for example the Jane DSL editor generator and some papers.

At this URL, we can find a serious study of EMF, including a mapping between MOF and EMF and a paper on a query language for the meta level:

EMF:
For people interested in EMF, the first document to read is the EMF overview and, if you have more time, the EMF.Edit overview.

Another interesting resource on EMF and GEF is the IBM redbook Eclipse Development using the Graphical Editing Framework and the Eclipse Modeling Framework.

The complete list of EMF documents is available on the Eclipse consortium web site.

Another IBM article is an illustration of GEF: Create an Eclipse-based application using the Graphical Editing Framework.


UML2 :
The UML2 project is an Eclipse Tools sub-project and is an EMF-based implementation of the UML™ 2.0 metamodel for the Eclipse platform.
The objectives of this project are to provide a usable implementation of the metamodel to support the development of modeling tools, and a common XMI schema to facilitate the interchange of semantic models between different tool providers.
A good introduction to UML2 is the Getting Started with UML2 article.

The UML2 specifications are published on the OMG web site.

BPEL and process:
BPEL4WS has a defined XSD which can be imported into EMF.
There is no associated notation provided, though mappings exist from BPMN and BPDM. One of these, or a proprietary notation, could be mapped to the XSD, possibly combining aspects of WSDL, to provide a rich BPEL4WS modeling environment.

Business Rules:
Business rules have no standard notation, although many examples exist. Furthermore, many rule engine models could be implemented in EMF.

GMF project organization:

Initial Committers

Interested Parties
The following companies have expressed interest in the project:

Known contributors on GMF?

Position of some existing CASE tools vendors:

From the MetaEdit+ guru at MetaCase, we can read other points of view related to the scope of the GMF project:

  • On Juha-Pekka's blog on DSM, you should read the following posts:

    • Is UML 2 taking SD further? April 14, 2005

    • MOF as a metametamodel? March 16, 2005

    • Unified Modeling Language vs. Domain-Specific Modeling Languages? March 03, 2005
    • Scott Ambler: OMG not doing a good job May 04, 2005

    • XMI, MOF and MetaEdit+ May 03, 2005

    • Microsoft says using its DSM tools is a "massive amount" of effort March 23, 2005

On the MetaCase web site, there are several materials to read too:

  • A Dr. Dobb's article, with a comparison between Eclipse and MetaEdit+ for DSM (Domain-Specific Modeling).

    In this article, MetaCase CTO Steven Kelly compares Eclipse EMF & GEF with MetaEdit+ as ways to implement domain-specific methods. One hour's work in MetaEdit+ reproduced Eclipse's own small sample application, which runs to 10,000 lines. At 50 lines of code a day, that would be over a year's work, making MetaEdit+ an incredible 2000 times faster.

  • In another paper, Steven Kelly gives MetaCase's view in a Comparison of Eclipse EMF/GEF and MetaEdit+ for DSM (Domain-Specific Modeling).


I think it is quite evident that a new generic framework like GMF will, in its first deliveries, be more limited than a mature framework like MetaEdit+, where a lot of applications already exist. People have to compare things that are comparable.


MOF:

For people wanting to go deeper into MOF, there is the paper on melo written by Unisys engineers and published on the IBM web site:
This paper presents the work Unisys has done to create a central enterprise architecture (EA). This work includes defining a standard core EA modeling language supported by the EA repository, and building transformations between tool-specific EA models and the standard core EA language.

Saturday, May 21, 2005

Instance or Class, Model or Metamodel ?

Alix and Marion, 2 instances of Patrick


Friday, May 20, 2005

Policy about blogs

I'm a new blogger (6 days; quite a baby still in this domain), so it will take time to find the right style.





I'm a self-taught blogger, but I understood several evident rules:

  • Stay polite

  • Don't use text or pictures from other materials. For text or documents, we can use hyperlink references to sources we want to consider as an outside extension of our own blog's scope.

  • Respect confidentiality about people (no email addresses and so on)

In most countries, there are rights and laws about displaying pictures.

Several references to policies about blogs, or more generally about web information publishing:






Thursday, May 19, 2005

Xerox software environments used by the GraphTalk team

Sorry to disappoint you: GraphTalk was not developed at Xerox PARC, even though we had several meetings at Xerox PARC (mainly for demonstrations) and other meetings and demos at other Xerox sites (the Rochester Xerox Labs, etc.).

GraphTalk was an internal project in the French operating company Rank Xerox France.

We had the great luck to meet intelligent managers, open enough to accept that a local team built something outside the official organizational framework.

Xerox sales people helped us a lot when they started to demonstrate and sell our successive prototypes as if they were a product.
Each commitment taken by the sales people with customers or prospects meant, for us, new resources or a new prioritization to the advantage of the GraphTalk project.

The first GraphTalk prototype was imagined and developed outside normal working hours, with a very brilliant trainee, Mounir Khlat from the French "École Nationale des Ponts et Chaussées".
It was not the first time in my career that I met brilliant developer students or graduates of "the bridge", as we say in France.

I met Mounir at the "Aulnay-sous-Bois" IT office of Rank Xerox France. He was a trainee on a not-so-fun subject: an expert system to help sales and marketing people with printer configuration management.

Several face-to-face discussions about Niam (I had joined Rank Xerox from the Maia project at BNP Paribas), semantic networks, metamodeling, XAIE capabilities, etc., quickly convinced Mounir to start working with me.
I only had the difficult mission of explaining to Mounir's previous manager that it was better, for Mounir's Master's degree and for Rank Xerox, to work on the software engineering domain and not on a basic expert system feasible directly with a simple PC/guru engine...
We started with training on XAIE and InterLispD, given by AI Xerox environment experts from Tecsi software (there was no local Xerox employee with enough knowledge of InterLispD to teach us).
My local boss ordered two Xerox 1108 AI machines and, as soon as we received them, we started some brain exercises...

The "Talk" string shared by SmallTalk and GraphTalk is the only common point with SmallTalk, that exceptionally talented environment.

Our competitor for technical choices inside Xerox was not SmallTalk (even though Pierre Cointe was close to Xerox during this period; Pierre was the SmallTalk-80 expert for Rank Xerox), but Mesa/XDE.

Only experts in Xerox software know about the other great Xerox software development environment, Mesa/XDE.
Mesa was the implementation environment for the Xerox ViewPoint environment (the OIS environment) running on the Xerox 8010 Star, the Xerox 6085, and PCs with a special hardware card and some specific chips.

During the GraphTalk period, the Mesa environment was used by Xerox at Rochester for a project in competition with the French GraphTalk team:

It was the SVP (Structured ViewPoint) project.
I met the SVP team in the Rochester Labs, and we were surprised that we had a lot of common functional ideas and a common enthusiasm for our respective projects.
SVP was close to the Xerox strategy, with very good ideas on document workflows, and with specific items for software development documentation in a collaborative approach.
GraphTalk was more engineering oriented and SVP more document oriented.

Jim Savage, the corporate IT manager (so the big IT boss), spent two days with Mounir Khlat and myself in France to look at all the prototypes we had made and to hear our explanations.
Jim was enthusiastic, and he quickly took three critical decisions concerning the future of the GraphTalk project:

  • Creation of an international internal team dedicated to software engineering, with me as the French member and GraphTalk mentor.
    • As often with this kind of committee, the results were weak:
      • SVP and GraphTalk stayed in competition
      • The IT of the French operating company would use the GraphTalk/Merise tool and other GraphTalk stuff for enterprise IT architecture, while the other countries would use the IEW James Martin CASE tool.
  • A decision to give me an exclusive mission on GraphTalk and AI topics, and to quickly stop working on my initial job description (architecture of the French IT systems).
  • Creation of the DIAGL (Direction Intelligence Artificielle et Génie Logiciel), an internal enterprise inside Rank Xerox France dedicated to the GraphTalk project and to the sales of the Xerox AI products in French-speaking countries.

GraphTalk was developed with XAIE InterLispD and an advanced prototype of GraphTalk was developed by Leopold Wilhelm with CLOS (Common Lisp Object System).


When we worked on the port of GraphTalk to standard operating systems (OS/2, later Windows, and still later Unix), we took a few days to consider whether we had to be good Xerox employees and use the official Mesa/XDE environment instead of InterLispD.

Market considerations, not limited to the Xerox sphere, were the most important: the importance of standards (operating systems) is critical for an IT manager before buying a product, even if he is convinced by the concepts.
We chose IBM OS/2 PM, and not Windows, because at that time OS/2 PM was strong and Windows had so many bugs that it was not efficient to develop with Windows 3.0.
I remember a meeting with Gartner analysts at Stanford: they were sure that IBM would be the winner of the war between IBM and Microsoft, in the competition between OS/2 PM and Windows as the PC system.

When you take training about product strategy, the teachers always say: you have to do a market study, you have to meet software analysts (from Gartner, Metagroup, etc.) with a business plan, and then you will be in a good position.
I think that when you start with a product, the first thing you have to do is build the product and listen to customers. Customers are more important than anything else.

A famous killer decision from marketing is the case of the SmallTalk team at ObjectShare (the new name of ParcPlace), a spin-off of Xerox for the SmallTalk activity.
In February 1997, Richard Dym, the marketing VP, put out a press release stating that the company's direction was moving away from Smalltalk towards Java. This became known as the suicide letter.

Don't get me wrong: I have no hatred for marketing people, and I have very good relationships with a lot of them.

What did we really use for GraphTalk from the Xerox tool box?

  • XAIE InterLispD, with two important tools
    • Important because we could program these tools for GraphTalk (with Grapher) and LEdit (with SEdit)
      • The InterLispD Grapher
        • the first iteration of GraphTalk was only a special Grapher application
        • due to limitations, we very quickly replaced Grapher with our own component.
      • The InterLispD Lisp editor SEdit
        • all iterations of LEdit were special SEdit applications
    • The InterLispD text editor TEdit
      • To generate documentation
      • Very early and easily, it was possible with a GraphTalk CASE tool to generate a document with the diagrams inside it and, by clicking on a diagram in the document, to open the diagram editor. In 1986, this was not common, and we could say it was very sexy.
    • Rooms
      • A genial product from Ramana Rao: the use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface
      • We used it to organize the CASE tool: one room per diagram editor or document editor.
    • Loops (from Daniel Bobrow and Mark Stefik)
      • for the concepts and the paradigms
    • Stem, a simulation environment developed with Loops and InterLispD by AIS Limited in the UK (David Butler was the manager of this external structure outside Rank Xerox UK)
      • for the concepts and the paradigms
    • NoteCards (from Randall Trigg), a powerful hypertext environment
      • for the concepts and the paradigms

If we look at the PARC story with the delivery of these Xerox software environments, we can say that at Rank Xerox France we met a lot of exceptional technologies during the GraphTalk project (1986-1992).

We were in the best software company for this kind of product development.


1981

At a Chicago tradeshow, Xerox unveils the 8010 STAR Information System. PARC's Alto personal workstation is the foundation for this product. The 8010's features include all of the Alto's capabilities plus multilingual software, the Mesa programming language, and interim file servers. The system allows users to create complex documents by combining computing, text editing and graphics, and to access file servers and printers around the world through simple point-and-click actions, a functionality that has yet to be matched by today's computing systems.


1985

The Xerox 6085 Professional Computer System, which runs PC programs and has advanced ViewPoint software document-processing capabilities, is released. The product builds on a foundation of PARC's Alto personal workstation and has features and performance capabilities beyond those of the previously released 8010 STAR Information System.

1985-1986

The Xerox 1185 and 1186 Artificial Intelligence (AI) Workstations, intended for the design, use and delivery of AI software and expert systems, are released. These artificial intelligence machines use the Interlisp-D programming environment and computer techniques developed at PARC to duplicate the human cognitive process of problem solving.

Using the Interlisp-D environment, PARC researchers develop Trillium and Pride expert systems for artificial intelligence programming. Trillium enables the quick simulation of new user interface designs. Pride captures engineers' experiences and "rules of thumb" for designing paper paths using pinch rollers.

Xerox markets Lisp workstations that use the Interlisp-D programming language to support artificial intelligence programming as well as applications utilized within Xerox. Developed as a computing environment for research in cognitive science, Interlisp-D combines ideas for rapid prototyping with explicit knowledge representation. With the Loops object-oriented extensions, it will be used to develop a number of valuable knowledge-based systems for Xerox.

1988

The Smalltalk-80 object-oriented programming language is commercialized through the formation of ParcPlace Systems. First deployed in 1972, Smalltalk was the first object-oriented programming language with an integrated user interface, overlapping windows, integrated documents, and cut & paste editor. The business, formed to market products based on the Smalltalk-80 programming environment and to further develop and support Smalltalk-80 standards, will later become ObjectShare.

For people wanting to read some papers from PARC, look at the PARC Blue and White series.

Wednesday, May 18, 2005

Niam tools

Niam (Nijssen Information Analysis Method) is the modeling method designed by G.M. Nijssen at Control Data Corporation in the 1980s.

Several tools supported this modeling approach, at two different layers:

  • Design and development steps
  • Production steps


Let's start with the Production layer (because it has only a few items):

  • A DBMS supporting the 5th NF formalism of the Niam logical model (a set of tables with a constraint-integrity-checking schema): Control Data provided the EMF DBMS, with real-time checking of all the constraints defined at the conceptual level with Niam in the binary model. We used it at BNP Paribas for the pilot project "Choice of the modeling method" in 1983. At that time, it was very, very new.
  • RIDL (Reference and Idea Definition Language), from Robert Meersman (Control Data too). It is a language to define business rules on a Niam model and to navigate in the Niam model. RIDL was able to work on an EMF database instance

Let's continue with the Design layer:

Control Data provided IAST (Information Analysis Specification Tool), with several variations:

    • IAST on mainframe (the Cyber series of Control Data):
      • A 3270-mode user interface to capture the binary model.
      • Generation of kilometers of paper documentation and binary model pretty printing (one central node (a NOLOT) per page, with all the ideas and bridges related to this node).
      • Generation of the neutral model (i.e. the Niam logical model)
      • Generation, with a nice pretty printing, of the neutral model (tables with primary and foreign keys, and the network of constraints between tables through rules on attributes).
      • Generation of various SQL DDL depending on the DBMS choice
    • PC-IAST on PC:
      • It was an IAST pre-processor on PC with a Windows GUI to capture the binary schema.
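The neutral-model generation described above can be sketched as follows: each NOLOT (non-lexical object type) becomes a table, each bridge to a lexical type becomes a column, and each many-to-one idea between NOLOTs becomes a foreign key. The data structures and naming here are invented for this toy example; the real IAST handled 5NF grouping, constraint networks, and far more:

```python
# Toy binary (Niam-style) model, invented for illustration.
nolots = ["Person", "Department"]
bridges = {"Person": ["name", "birth_date"], "Department": ["dept_name"]}
ideas = [("Person", "works_in", "Department")]  # many-to-one ideas

def to_ddl():
    """Generate SQL DDL for the neutral (relational) model."""
    stmts = []
    for n in nolots:
        cols = [f"{n.lower()}_id INTEGER PRIMARY KEY"]
        # Each bridge to a lexical object type becomes a plain column.
        cols += [f"{b} VARCHAR(100)" for b in bridges.get(n, [])]
        # Each many-to-one idea from this NOLOT becomes a foreign key.
        for src, role, dst in ideas:
            if src == n:
                cols.append(
                    f"{role}_{dst.lower()}_id INTEGER "
                    f"REFERENCES {dst}({dst.lower()}_id)"
                )
        stmts.append(f"CREATE TABLE {n} (\n  " + ",\n  ".join(cols) + "\n);")
    return "\n".join(stmts)

print(to_ddl())
```

A real generator would also have to decide how to group one-to-one and many-to-many ideas into tables, which is where the 5NF theory mentioned above comes in.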

Qint developed a Niam tool on PC, with an RDBMS on PC named Qint/SQL (at that moment, it was the beginning of Oracle Corporation, so there were some competitors on this RDBMS market, even on PC).

  • The binary data model editor had the GUI of this period (capture of each idea or bridge in a separate screen) on MS-DOS. The name of this tool was Tina.
  • Generation of the neutral model (i.e. the Niam logical model)
  • Generation of the SQL DDL for the Qint/SQL RDBMS

Other stuff with Control Data Corporation

  • Control Data Corporation (with Frans Van Assche) and BNP Paribas together developed, in 1983-1984, a prototype of a Niam tool with Borland Turbo Pascal. This prototype's scope was equivalent to the Tina features.
  • Control Data Corporation and Xerox had a common project (1990-1991) to provide GraphTalk on Control Data Unix workstations (in fact, Silicon Graphics workstations in OEM).

    Silicon Graphics workstations were wonderful machines.

    We studied with Control Data the compatibility of their Unix with the source code of the Medley InterLispD emulator.
    This study was done by Envos (which would become Venue a few years later), a spin-off company created by Xerox PARC to externalize Xerox's AI products (Medley was the name of the emulator). In this new company we found Jill Marci Sybalsky, a very confirmed and talented senior InterLispD developer from Xerox PARC.

    The plan was that it would be better and faster to port the emulator (a tricky program, but without so much code) to a new Unix machine (even if it was a new flavor of Unix versus SunOS, for example) than to wait for the port of GraphTalk to C/C++ on Unix (at that moment we were porting GraphTalk to IBM OS/2 and Presentation Manager, and that was enough in difficulties, workload and challenges). For the same price, Control Data would have all the Xerox software running with InterLispD.

    The most costly part of this project was not the development effort, but all the meetings we had with marketing people from Rank Xerox France and Control Data Corporation, and some demos I made at the Control Data headquarters in Minneapolis (I remember it very well: I left France at ten degrees Celsius above zero and found fifteen degrees below, with a big snow storm, at the Minneapolis airport).

    The development didn't start, and the project was aborted, not for technical or human reasons, but for classical organizational issues (two French operating companies of American international companies (Xerox, Control Data Corporation) would have built a business outside the core business of their headquarters...).





GraphTalk stuff

  • GraphTalk/Niam (InterLisp) at Xerox:
    • With the GraphTalk metatool, we developed our flagship CASE tool at the beginning of this project.
    • The product existed on all InterLispD platforms (Xerox of course, but also IBM, Sun and DEC Unix workstations with the InterLispD Medley emulator developed by Fuji Xerox and some key developers at Xerox PARC).
    • The product used:
      • 2 metamodels (one for the binary model, one for the neutral n-ary model)
      • GKnowledge (an inference engine developed by Leopold Wilhelm with GraphTalk), used as a model generator based on production rules.
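The production-rule principle behind such a model generator can be sketched with a minimal forward-chaining loop: rules fire on facts already in the model and add derived facts until nothing new can be deduced. GKnowledge's actual rule language and model representation are not public; the facts and rules below are invented purely for illustration:

```python
# Facts are (subject, predicate, object) triples; invented examples.
facts = {("Person", "is_a", "NOLOT"), ("Person", "has_bridge", "name")}

# Each rule: (condition on one fact, function producing a derived fact).
rules = [
    # Every NOLOT in the binary model yields a table in the neutral model.
    (lambda f: f[1:] == ("is_a", "NOLOT"),
     lambda f: (f[0], "maps_to", f"table_{f[0].lower()}")),
    # Every bridge yields a column.
    (lambda f: f[1] == "has_bridge",
     lambda f: (f[0], "has_column", f[2])),
]

def forward_chain(facts, rules):
    """Fire rules repeatedly until no rule adds a new fact."""
    changed = True
    while changed:
        changed = False
        for cond, produce in rules:
            for f in list(facts):
                if cond(f):
                    new = produce(f)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

result = forward_chain(set(facts), rules)
```

After the loop, `result` contains the derived neutral-model facts (the table and column triples) alongside the original binary-model facts.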

We demonstrated these tools at several seminars, and the reactions were always positive, with one huge limitation for people: the need for an AI machine to run this application was often crippling for IT organizations (Gaz de France and France Telecom were nevertheless quickly customers of the GraphTalk Lisp CASE tools). It was different with universities, where the bundle (GraphTalk, plus all the AI stuff from Xerox PARC like NoteCards, Loops, XAIE, CLOS, etc.) was attractive.

  • GraphTalk/Niam (C/C++) at Parallax Software Technologies:
    • The product existed on all C/C++ GraphTalk platforms (IBM OS/2 with Presentation Manager as the GUI, Microsoft Windows 3.1 and successors, and Unix platforms with Motif as the GUI on IBM AIX, SunOS from Sun, and HP-UX from Hewlett-Packard).
    • The product used:
      • 2 metamodels (one for the binary model, one for the neutral n-ary model)
      • A C++ program to generate the neutral model and all the DDL stuff depending on the targeted DBMS

  • GraphTalk/Maia (C/C++) at Parallax Software Technologies:
    • The product existed on all C/C++ GraphTalk platforms and was used at BNP Paribas on OS/2 PM and Windows
    • The product used:
      • the 2 metamodels (one for the binary model, one for the neutral n-ary model) of GraphTalk/Niam (C/C++), with some localization and customization issues.
      • The GraphTalk/Niam (C/C++) program to generate the neutral model and all the DDL stuff depending on the targeted DBMS
      • The other metamodels used by BNP Paribas in Maia, like the Processus model, MCT, MOT, DFD, etc.
    • 2 different generations of GraphTalk:
      • GraphTalk 2.5
      • GraphTalk 3.0 and successors

ORM

In another blog post, I will talk about ORM tools.
ORM (Object Role Modeling) is a modeling method very close to Niam, developed by Dr. Terry Halpin.
Object Role Modeling (ORM) is a powerful method for designing and querying database models at the conceptual level, where the application is described in terms easily understood by non-technical users.

Terry and G.M. Nijssen wrote together the best book on Niam: "Conceptual Schema and Relational Database Design: A Fact Oriented Approach"; Prentice Hall, 1989; ISBN 0-7248-0151-0.


Tuesday, May 17, 2005

Communicate is not so easy and hardware is vicious

Even if the title "My meta done" of my weblog seems like a joke, the materials I wish to discuss are always serious, but not academic.

I have, however, several real-life stories which can show the difference between the plan and the reality.

The bone stuck in my throat

In 1989, I was invited by the AFIM (Association française des ingénieurs et responsables de maintenance) to give a conference about GraphTalk in Switzerland, for a "Forum International de la Maintenance".

There was no real link with GraphTalk, but the sales people at Rank Xerox France wanted to use the occasion to talk about Xerox at an IT conference. The sales map of the French subsidiary included Switzerland (it is often the case in international high-tech companies that Switzerland and Belgium are sometimes considered as French provinces...).

Very early in the morning, we (my boss, Leopold Wilhelm (a senior developer), and myself) took the airplane from Paris to Geneva, with, in our luggage, only the hard disk of the Xerox 1186 AI machine.
The local Xerox subsidiary lent us the workstation.

We reached the local office at the end of the morning, before lunch (sandwiches).
No stress: the conference was planned for 5 o'clock in the afternoon.

At the beginning of the afternoon, I was unable to pronounce any word with an understandable voice. At that time, I smoked 3 packs of "Craven A" without filter per day, and I was feeling a bit throaty.

My conference was announced in the seminar program, and it was not imaginable to stay out of the game.
We found a special medicine at the chemist's shop, but without success.

At 5 o'clock, we were in the amphitheater, with nearly 150 people waiting for the conference.
It was dramatic.
My boss invented a tricky story: Swiss customs officers had made some problems with the Xerox AI machine, so we were not ready to give the original conference, but we had something to present.

Leopold and I stayed at the gallery.
Because words were sticking in my throat, I was reduced to pointing and clicking with the mouse on each element of the presentation, whispering scraps of words into Léopold's ear with the voice of Marlon Brando in The Godfather, and Léopold repeated them very conscientiously and loudly.

The nightmare lasted around one hour. It was a complete disaster.
At 6 o'clock, only 25 people were still in the amphitheater.

Sometimes, when I watch Woody Allen's movies, I remember this glorious day in Geneva...

Don't worry for my health, I don't smoke anymore.


Why I prefer software to hardware

In 1990, Xerox invited a lot of people into the wide amphitheater of the IT organization at Aulnay-sous-Bois, near Paris, where I was hiding to work on GraphTalk...
It was a quarterly meeting, and the French sales people wanted to show customers, working in the organization domain in large French companies, how the know-how and technology of Rank Xerox could help them do their job.

All the GraphTalk stuff in this period ran only on AI Xerox machines.
For the expert, it was a Xerox 1108, with a look similar to the Xerox Star workstation, or a Xerox 1186, which had some parts in common with the OIS ViewPoint workstation, but the hardware was different: a big hard disk, additional memory (Lisp is very consuming) and specific chips.
So to demonstrate GraphTalk in these years, we needed athletic developers (at the beginning of the GraphTalk project, only developers were able to demonstrate the product).

The demo conference was planned and announced for 2 o'clock.
So we had time to lunch at the internal restaurant and to install the workstation in the amphitheater at half past one.

The amphitheater's doors were opened to the visitors a quarter of an hour before 2 o'clock.

We pushed the button to boot the machine at that moment, but the screen very quickly and definitively froze, and the machine was out of order.

We quickly decided to take a developer's machine, 3 floors upstairs.

At 2 o'clock, around 120 people were sitting in the amphitheater and we had already consumed two workstations. We started on the third.

Visitors were surprised mainly by three things:

  • First, to see so many different people carrying large screens and desktop machines with huge tension.
  • Second, the meeting didn't start on time; for a meeting about organization, it was strange to be late, and in this period at Xerox there were large posters about the items of quality, the first one being punctuality.
  • Last, to see a field of devastation of 4 dead machines in the gallery.

We resolved the issue with a fourth machine at half past two.

The beginning of the demo, with thunderous applause, helped us hide the external tensions and the internal stress.

The developers were disappointed to be without their workstations during the afternoon.

That day, I started to understand that it would be difficult to sell technology like software engineering products to IT departments if our products continued to run only on Lisp machines.
Our competitors didn't have a sexy product like GraphTalk, but their products ran on PCs with GEM and a lot of RAM.
An AI workstation was more intimidating than a simple PC in the IT area.

In this period, people often confused languages, like Lisp or Prolog, with scientific subjects like AI (Artificial Intelligence). This fuzzy association did not make the GraphTalk launch any easier.

Why "My meta done"

  • Google asked me to give a name to the blog ...
  • "My meta done" was the message displayed in the prompt window of the Xerox machine when we compiled any GraphTalk metamodel to obtain the graph editors.

Monday, May 16, 2005

Maia and Niam


This second blog is on a new subject, dedicated to Maia, or MAIA (Méthode d'AIde à l'Analyse, i.e. analysis support method).

Maia is a modeling method designed and developed (along with several CASE tools) at BNP Paribas during the 1983-1986 period.

In those '80s years, there was a great battle between several modeling techniques in France, elsewhere in Europe, and in North America.

Relational DBMSs were really starting to be used in large IT organizations, and the classical analysis methods were inappropriate.

Changes in modeling came mainly from DBMS modeling gurus, with basically three approaches for the data modeling part:

  • The first was to work at the logical level, mainly with Bachman diagrams for Codasyl (IDS/2 and co.) or hierarchical DBMSs (IMS and co.), and for some experts with mathematical n-ary relational diagrams and a normalization/denormalization process based on 5NF theory (Codd, Date, Kent, etc.).
  • The second was to work at the logical/conceptual level with the Entity-Relationship approach (Peter Chen in the USA; Merise (Hubert Tardieu, Yves Tabourier, Arnold Rochfeld, etc.) in France; IDA (François Bodart) in Belgium; etc.).

    Even if the boxology seems similar, a Merise MCD ("Modèle Conceptuel des Données", conceptual data model) is different from a UML (Unified Modeling Language) class diagram.
  • The third approach was to work clearly at the conceptual level with the binary approach of Niam (G.M. Nijssen) at Control Data Corporation. Niam was inspired by semantic networks and other AI work.

Here we can observe the relativity of IT marketing.
In this period, at the beginning of the RDBMS market, the slogan was:

"The best thing is to totally separate the data analysis and the process analysis, so the application can be modified easily."

A few years later (the frequency of each new technology wave seems to be 5 years ...), the slogan was the opposite, with the aficionados of the object approach encapsulating the processes (i.e. the methods) inside the objects.

The main approaches for the process modeling part were:

  • The first was to work with data flow diagram decomposition (Yourdon, DeMarco), as in SA, SADT (Douglas T. Ross), etc.
  • The second was to work with Petri-net-like diagrams (Merise, IDA). It came from the European IT community.

BNP Paribas finished the "Project control method" project by choosing SDMS as the method infrastructure. The French name was MSP ("Méthode Standard de Projet", i.e. standard project method); it was an adaptation of the US materials, with a lot of information inside it.
And a lot of paper to handle....
Too much paper is one main reason developers aren't fans of methodology!


I was hired at BNP Paribas by André Ruff to lead the "Modeling method" project.

This blog is the first of a set of blogs dedicated to this Maia project, for the 1983-1986 period.

It was a very important period for me, because it was the beginning of my conceptual IT activities with Niam/Maia at BNP Paribas, before GraphTalk at Xerox and Parallax Software Technologies.

Several years ago, during a French modeling seminar, I presented some slides explaining the continuity of these two periods of my professional story, and the mapping we can make between Niam/Maia and GraphTalk. This will be developed later in other blogs.



Sunday, May 15, 2005

GraphTalk


This weekend is a particular one, because I have decided to write a blog about GraphTalk and, more generally, about the GraphTalk project.

But in a few words, what is GraphTalk?

GraphTalk is a meta-CASE technology developed at Xerox during the 1986-1988 period.

Mounir Khlat and Patrick Jeulin are the two co-authors of this environment, even if many developers were involved in the GraphTalk project, first at Xerox and later at Parallax Software Technologies.

GraphTalk's foundations are:

  • the Niam (Nijssen's Information Analysis Method) modeling method from G.M. Nijssen, developed at Control Data Corporation, and the Niam extensions we made during the Maia project at BNP Paribas,
  • the work of John Sowa on conceptual graphs and the mathematical theory of hypergraphs,
  • the RIDL project from Robert Meersman,
  • all the conceptual material we used to set up the enterprise IT architecture at Rank Xerox France (1986-1988),
  • the stunningly beautiful Xerox Artificial Intelligence Environment (XAIE), with four components in particular:
    • the InterLispD grapher,
    • the Loops paradigms (Daniel G. Bobrow and Mark Stefik),
    • the InterLispD SEDIT structural Lisp editor,
    • the Notecards (Randall Trigg) hypertext environment,
  • the Metal specification language of the Mentor system, from Bertrand Melese.

The initial GraphTalk implementation environment was InterLispD (the Xerox Lisp), on Xerox 1186 workstations and on Sun workstations with the InterLispD emulator.

Programming with GraphTalk is pure visual programming, even if APIs (in Lisp for the initial versions, in C/C++ for the portable versions) allow developers to implement sophisticated triggers, methods or specific modules.

During 1988-1990, GraphTalk was completely re-designed and re-implemented in C/C++ to run on OS/2 Presentation Manager, Windows, and Unix/Motif on Sun, HP and IBM workstations.

GraphTalk has its own proprietary object manager. Some GraphTalk prototypes were implemented with object-oriented DBMSs like Ontos, ObjectStore, Versant, Poet and Matisse. These GraphTalk versions were only internal prototypes, never used outside the labs.


The main part of GraphTalk is the GraphTalk developer environment (the meta-tool), with dedicated graph editors to specify the grammar of the target editors. A compiler uses this source to generate the CASE tool.

A CASE tool is a hypergraph editor; each graph has its own grammar.

GQL (GraphTalk Query Language) allows access to all data in the hypergraph with an SQL-like syntax.

The properties of each instance (hypergraph, graph, node, link) can be multi-valued typed fields (integer, decimal, string, binary file, structured file, etc.) or other instances (graphs, nodes).
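To make this data model more concrete, here is a minimal sketch in Python of a hypergraph whose instances carry multi-valued typed properties or references to other instances. This is purely illustrative; the class and property names are invented, and GraphTalk's actual object manager was of course far richer.

```python
# Hypothetical sketch of a hypergraph data model: nodes and hyperedges
# carrying multi-valued typed properties. Not GraphTalk's actual API.

class Node:
    def __init__(self, name):
        self.name = name
        self.properties = {}  # property name -> list of values (multi-valued)

    def add_property(self, key, *values):
        self.properties.setdefault(key, []).extend(values)

class Link:
    """A hyperedge: unlike a plain graph edge, it may connect any number of nodes."""
    def __init__(self, label, *nodes):
        self.label = label
        self.nodes = list(nodes)

class Graph:
    def __init__(self, name):
        self.name = name
        self.nodes = []
        self.links = []

# Usage: an entity with multi-valued and typed properties, linked to another.
entity = Node("Customer")
entity.add_property("attribute", "name", "address")  # multi-valued string field
entity.add_property("population", 120000)            # integer field
order = Node("Order")
places = Link("places", entity, order)               # a binary hyperedge

g = Graph("billing")
g.nodes += [entity, order]
g.links.append(places)
```

An SQL-like query layer such as GQL can then be understood as traversing and filtering exactly this kind of structure.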

GQLReport (with a mix of GQL and SGML/XML syntax) allows the production of Word documents as instances of GTDs (GraphTalk Template Documents).

LEdit (Language Editor) is a BNF editor used to define the grammar of a language and to generate a parser and a structural editor compliant with that grammar.

LEdit's co-authors are Léopold Wilhelm, José Luu and Mounir Khlat. LEdit was used for the procedural parts of the generated CASE tools and as a code generator itself (for example, the Root CASE tool generated Ada code, and the UML CASE tool generated C++ code).
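To give a flavor of the general idea behind a BNF-driven tool like LEdit (this is a generic sketch in Python, not LEdit's actual notation or generated code), here is a tiny BNF grammar and the kind of recursive-descent parser that a grammar-driven tool derives from it mechanically:

```python
import re

# Toy grammar in classic BNF:
#   <expr> ::= <term> { ("+" | "-") <term> }
#   <term> ::= NUMBER
# One parse function per grammar rule, following the BNF structure.

def tokenize(text):
    """Split the input into numbers and operators."""
    return re.findall(r"\d+|[+\-]", text)

def parse_term(tokens):
    # <term> ::= NUMBER
    if not tokens or not tokens[0].isdigit():
        raise SyntaxError("number expected")
    return int(tokens[0]), tokens[1:]

def parse_expr(tokens):
    # <expr> ::= <term> { ("+" | "-") <term> }
    value, rest = parse_term(tokens)
    while rest and rest[0] in "+-":
        op, rest = rest[0], rest[1:]
        rhs, rest = parse_term(rest)
        value = value + rhs if op == "+" else value - rhs
    return value, rest

result, leftover = parse_expr(tokenize("12+3-5"))
# result == 10, leftover == []
```

A structural editor generated from the same grammar would manipulate the parse tree directly instead of raw text, which is what made LEdit usable for both parsing and editing.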

With GraphTalk, a lot of CASE tools were implemented, directly by the Xerox and Parallax Software Technologies teams, and by partners too (for example, a Fusion CASE tool by SoftCase in the UK, and an OSSAD environment by C-Log in Switzerland).

The main CASE tools developed were: SADT, IDEF0, SA, SD, SART, Gane-Sarson, Merise, Merise 2, Niam, Maia, HOOD, OMT, UML, Shlaer-Mellor OOA, and Axial.

Several model generators were developed by Léopold Wilhelm. The generation used GKnowledge (an inference engine developed with GraphTalk) to take as input, for example, a Niam data model and to generate a 5NF relational model with the complete integrity-rules schema.
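As a rough illustration of the kind of mapping such a generator performs (a deliberately simplified sketch in Python, with invented names and input format; the real GKnowledge generation also exploited uniqueness constraints, normalization and integrity rules), grouping Niam-style binary fact types by their subject entity already yields candidate relation schemas:

```python
# Hypothetical sketch: derive relation schemas from binary fact types,
# each given as (subject entity, role name, object type).

def to_relations(fact_types):
    """Group each subject entity's roles into one relation schema."""
    relations = {}
    for subject, role, obj in fact_types:
        relations.setdefault(subject, []).append((role, obj))
    return relations

facts = [
    ("Customer", "has_name", "String"),
    ("Customer", "lives_at", "Address"),
    ("Order", "placed_by", "Customer"),
    ("Order", "has_total", "Decimal"),
]

schemas = to_relations(facts)
# schemas["Customer"] == [("has_name", "String"), ("lives_at", "Address")]
```

The actual generator's value was precisely in everything this sketch omits: deciding, from the constraints on the fact types, which groupings are legal in 5NF and which integrity rules must accompany the resulting tables.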

Some IT software engineering teams developed their own modeling environments, and some methodology gurus used GraphTalk to implement their own methods:

  • Pierre Der Arslanian for the Root method,
  • BNP Paribas for Maia,
  • Henri Lenormand for the SCOPE (Système de connaissance pour l'entreprise, i.e. enterprise knowledge system) environment,
  • etc.

Some French universities used GraphTalk to develop some research prototypes.

Some European ESPRIT projects used GraphTalk too.

Since 1995, GraphTalk has been a product and a trademark of CSC (Computer Sciences Corporation).

So this blog and the set of GraphTalk blogs are, and will be, dedicated exclusively to the 1986-1995 period, when I was in charge of the GraphTalk project.

Gilles Barbedette, senior developer and architect at Parallax Software Technologies, became the GraphTalk chief architect at CSC in 1995.