GPT Auto-Documenter Gist

Started by cybye, March 29, 2023, 18:55:08


cybye

Hi all,

here is a script to auto-describe a view: GPT Gist

like turning this:

[view screenshot]

into this:


----- automatically generated at Wed Mar 29 2023 19:31:12 GMT+0200 (CEST) using gpt-35-turbo -----
The Default View is a view in the GPT ArchiMate model. It contains several concepts that are related to each other. The User is a business-actor that asks the Question data-object and receives the Answer data-object. The Question & Answer business-interaction triggers the Simulated Intelligence business-actor and accesses several data-objects, including Final Query template, Documents indexed by Vectors, Question, Answer, History Question Template, and Dialog History. The Formulate Query application-process accesses Final Query template and triggers Return Result application-process while getting Completion from LLM application-service. Compute Embedding application-process gets Vector from LLM (embeddings) application-service and triggers Find top k nearest documents application-process. Receive Question application-process accesses Question data-object and triggers Rephrase to standalone Question application-process. Find top k nearest documents application-process accesses Documents indexed by Vectors data-object and triggers Formulate Query application-process. Return Result application-process accesses Answer data-object. Rephrase to standalone Question application-process triggers Compute Embedding application-process while accessing History Question Template data-object and rephrasing to standalone LLM service while accessing Dialog History data-object which contains both Answer and Question data-objects. Simulated Intelligence is another business-actor in this view while LLM (embeddings), LLM are two different types of Application Services.


Please note that this changes the view's documentation property (everything from "----- aut" to the end). You will need to provide an endpoint as well as an API key at the bottom of the script.
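The "from the marker onward" behaviour can be sketched as plain string surgery: keep any hand-written documentation above the marker and replace only the generated part. This is a minimal sketch; the names (MARKER, updateDocumentation) are illustrative, not taken from the gist:

```javascript
// Keep any manual text before the marker, replace the generated block after it.
// MARKER matches the start of the "----- automatically generated ..." line.
const MARKER = "----- aut";

function updateDocumentation(existingDoc, generatedText) {
  const idx = existingDoc.indexOf(MARKER);
  // If a previous generated block exists, cut it off; otherwise append.
  const kept = idx >= 0 ? existingDoc.slice(0, idx) : existingDoc;
  return kept + generatedText;
}
```

In a jArchi script this would be applied along the lines of `view.documentation = updateDocumentation(view.documentation, result)`.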

const result = gpt(GPT_URL, prompt, GPT_KEY)
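Presumably the gpt() helper posts the prompt to an OpenAI-style chat completions endpoint. A hedged sketch of how such a request could be assembled (the endpoint shape, model name and headers are assumptions based on the standard OpenAI API, not taken from the gist):

```javascript
// Hypothetical sketch of the request a gpt() helper might send.
function buildGptRequest(url, prompt, apiKey) {
  return {
    url: url,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + apiKey
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }]
    })
  };
}
```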
Please also note that the result is very (!)

a) dependent on what you are showing in the view
b) dependent on the mood of the LLM at each moment

I find it useful anyway.

Cheers
c



Jean-Baptiste Sarrodie

Hi,

That's really interesting. I wanted to experiment with some use-cases for GPT but needed some examples to understand how to call its API...

Well done!

Regards,

JB
If you value and use Archi, please consider making a donation!
Ask your ArchiMate related questions to the ArchiMate Community's Discussion Board.

Phil Beauvoir

Thanks for sharing! Archi meet GPT... ;-)

Phil

cybye

small fix (const in #155, protected from accidental changes)

cybye

Quick update. I was able to run this against GPT-4 with a little bit of prompt tuning as well as ordering of the elements. The output is quite impressive:

----- automatically generated at Thu Apr 13 2023 09:56:13 GMT+0200 (CEST) using gpt-4 -----
The architecture model GPT contains a view called "Default View" in the folder Views/. This view consists of various components that interact with each other to form a system. The main components are business-interactions, business-actors, data-objects, and application-processes.

In this system, there is a business-interaction called "Question & Answer" which triggers a business-actor named "Simulated Intelligence." This interaction also accesses several data-objects such as "Final Query template," "Documents indexed by Vectors," "Question," "Answer," "History Question Template," and "Dialog History."

There are two business-actors involved: the User and the Simulated Intelligence. The User asks a Question (data-object) and receives an Answer (data-object). The Simulated Intelligence is responsible for processing the question and generating an answer.

The application-processes in this system include:

1. Receive Question: This process accesses the data-object Question and triggers another process called Rephrase to standalone Question.
2. Rephrase to standalone Question: This process accesses the data-object Dialog History, History Question Template, rephrases to standalone using LLM application-service, and triggers Compute Embedding.
3. Compute Embedding: This process gets Vector from LLM (embeddings) application-service and triggers Find top k nearest documents.
4. Find top k nearest documents: This process accesses Documents indexed by Vectors data-object and triggers Formulate Query.
5. Formulate Query: This process accesses Final Query template data-object, gets Completion from LLM application-service, and triggers Return Result.
6. Return Result: This process accesses the Answer data-object.

Additionally, there is a Dialog History data-object that contains both Questions and Answers.

In conclusion, this system's purpose is to facilitate a question-and-answer interaction between a user and simulated intelligence using various processes such as rephrasing questions, computing embeddings, finding relevant documents based on vectors, formulating queries based on templates, and returning results in the form of answers.
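The "ordering of the elements" mentioned above could be sketched roughly as follows, in plain JavaScript over a simplified relationship list rather than the actual jArchi object model (the data shape and function name are invented for illustration):

```javascript
// Flatten view relationships into one statement per line, sorted by
// relationship type and source so related statements sit together in
// the prompt, giving the LLM a stable, grouped input.
function relationsToPrompt(relations) {
  return relations
    .slice() // avoid mutating the caller's array
    .sort((a, b) => a.type.localeCompare(b.type) || a.source.localeCompare(b.source))
    .map(r => `${r.source} (${r.sourceType}) ${r.type} ${r.target} (${r.targetType})`)
    .join("\n");
}
```

A real script would walk the view with the jArchi collection API instead of receiving a pre-built list.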


Maybe it's time to start working on the other way around?

PS: this is one of the great advantages of using a language like ArchiMate - it is translatable.

Jean-Baptiste Sarrodie

Hi,

Quote from: cybye on April 14, 2023, 07:22:34 AM: PS: this is one of the great advantages of using a language like ArchiMate - it is translatable.

I agree; that's really why I think training a language model such as GPT on a set of ArchiMate models and other architecture descriptions would be very helpful in the future. Phil and I presented this idea during an Open Group event in 2017; we then imagined the kind of discussion one could have with a virtual architect to help in day-to-day work.

Quote from: cybye on April 14, 2023, 07:22:34 AM: Maybe it's time to start working on the other way around?

Yes, some related ideas and potential use-cases:
- It would be interesting to provide more context to GPT (e.g. the chosen viewpoint and its textual description, the view's name, the model's name and documentation, a list of elements which are not shown in the view but exist in the model and have relationships with the view's elements). I guess the resulting text would be even more impressive.
- Though there are some limitations to the size of the prompt that can be sent to GPT, it should be possible to give it a big part of the model so that it generates a good summary of the model itself. This summary could be saved somewhere and added to the prompt each time you ask something else, making sure GPT knows the overall context of the model the view sits in.
- We might be able to use GPT to find duplicates which have different names by calling it one time per element type with the list of element names and documentation.
- I guess GPT should be able to generate a simplified view based on a more detailed one.
- GPT is usually able to quickly provide a good decomposition of most "industry standard" value chains. For example I asked it to give me a list of the main steps of the "S2P" value chain, and it recognized "Source to Pay" and provided a very good description of it. I'm sure we could use this ability to create a "modelling copilot" (in Archi) with which one would chat, and at some point generate content from answers.
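The duplicate-finding idea above (one call per element type) could be sketched as follows. The grouping is real; the prompt wording and function name are invented for illustration:

```javascript
// Build one prompt per ArchiMate element type, listing each element's
// name and documentation, so GPT can be asked about likely duplicates
// within a single type at a time (keeping each prompt small).
function duplicatePrompts(elements) {
  const byType = {};
  for (const e of elements) {
    (byType[e.type] = byType[e.type] || []).push(`- ${e.name}: ${e.documentation || ""}`);
  }
  return Object.keys(byType).map(type =>
    `These are ${type} elements from an ArchiMate model. ` +
    `List any that likely describe the same thing under different names:\n` +
    byType[type].join("\n")
  );
}
```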

Regards,

JB

Phil Beauvoir

Quote from: Jean-Baptiste Sarrodie on April 14, 2023, 09:57:45 AM: Phil and I presented this idea during an Open Group event in 2017; we then imagined the kind of discussion one could have with a virtual architect to help in day-to-day work.

It was Amsterdam 2017. The idea and conception were JB's, as was the hand-drawn presentation (note the word "Visionary" in JB's forum bio ;-) )

Here's a summary of the presentation - https://www.archimatetool.com/blog/2017/11/07/open-group-conference-amsterdam-2017/

rchevallier

Really impressive!

"Easy" view documentation

cybye

Quote from: Jean-Baptiste Sarrodie on April 14, 2023, 09:57:45 AM: - It would be interesting to provide more context to GPT (e.g. the chosen viewpoint and its textual description, the view's name, the model's name and documentation, a list of elements which are not shown in the view but exist in the model and have relationships with the view's elements). I guess the resulting text would be even more impressive.
- Though there are some limitations to the size of the prompt that can be sent to GPT, it should be possible to give it a big part of the model so that it generates a good summary of the model itself. This summary could be saved somewhere and added to the prompt each time you ask something else, making sure GPT knows the overall context of the model the view sits in.

I was experimenting with translating the full model and each view into robotic English and then indexing it to make it queryable, as shown in the Q&A view. This works, but it also proved to be limited. I have not tried it with GPT-4 yet. Documenting each view will support this approach, because the documentation can be indexed a bit more easily (concluding over the conclusions).

Anyway, today I think it would be worth a try to turn a standalone (architectural) question into a set (or sequence) of concepts, collect all related concepts from the underlying graph, and use those to reason about the question. This feels more promising to me than just staying at the one-dimensional language level and staying with our "views".
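The "collect all related concepts from the underlying graph" step could be sketched as a simple breadth-first expansion from the seed concepts, here over relations simplified to source/target id pairs (the data shape and function name are illustrative, not from the gist):

```javascript
// Starting from seed concept ids, expand along model relationships in
// both directions for a bounded number of hops, collecting everything
// reachable. The result could then be serialised into a GPT prompt.
function collectRelated(seeds, relations, hops) {
  const found = new Set(seeds);
  let frontier = new Set(seeds);
  for (let i = 0; i < hops; i++) {
    const next = new Set();
    for (const r of relations) {
      if (frontier.has(r.source) && !found.has(r.target)) next.add(r.target);
      if (frontier.has(r.target) && !found.has(r.source)) next.add(r.source);
    }
    next.forEach(id => found.add(id));
    frontier = next; // only newly discovered concepts expand further
  }
  return found;
}
```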


Quote from: Jean-Baptiste Sarrodie on April 14, 2023, 09:57:45 AM: - We might be able to use GPT to find duplicates which have different names by calling it one time per element type with the list of element names and documentation.


:) hope speaks here


Quote from: Jean-Baptiste Sarrodie on April 14, 2023, 09:57:45 AM: - I guess GPT should be able to generate a simplified view based on a more detailed one.
- GPT is usually able to quickly provide a good decomposition of most "industry standard" value chains. For example I asked it to give me a list of the main steps of the "S2P" value chain, and it recognized "Source to Pay" and provided a very good description of it. I'm sure we could use this ability to create a "modelling copilot" (in Archi) with which one would chat, and at some point generate content from answers.

That's a good one. Maybe a simple "turn it into ArchiMate concepts" approach, like "Generate View...". I will check this out; this was actually my starting point.