Started by cybye, March 29, 2023, 18:55:08 PM
----- automatically generated at Wed Mar 29 2023 19:31:12 GMT+0200 (CEST) using gpt-35-turbo -----
The Default View is a view in the GPT ArchiMate model. It contains several concepts that are related to each other. The User is a business-actor that asks the Question data-object and receives the Answer data-object. The Question & Answer business-interaction triggers the Simulated Intelligence business-actor and accesses several data-objects, including Final Query template, Documents indexed by Vectors, Question, Answer, History Question Template, and Dialog History.

The Formulate Query application-process accesses the Final Query template, gets a Completion from the LLM application-service, and triggers the Return Result application-process. The Compute Embedding application-process gets a Vector from the LLM (embeddings) application-service and triggers the Find top k nearest documents application-process. The Receive Question application-process accesses the Question data-object and triggers the Rephrase to standalone Question application-process. The Find top k nearest documents application-process accesses the Documents indexed by Vectors data-object and triggers the Formulate Query application-process. The Return Result application-process accesses the Answer data-object. The Rephrase to standalone Question application-process triggers the Compute Embedding application-process; it accesses the History Question Template data-object, gets a rephrased standalone question from the LLM application-service, and accesses the Dialog History data-object, which contains both the Answer and Question data-objects.

Simulated Intelligence is another business-actor in this view, while LLM (embeddings) and LLM are two different application-services.
const result = gpt(GPT_URL, prompt, GPT_KEY)
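The `gpt()` helper itself is not shown in the thread; a minimal sketch of what it might build is given below, assuming the OpenAI chat-completions HTTP API. The function and payload shape are assumptions, not the actual jArchi script: it only assembles the request (URL, headers, JSON body) so the calling code stays a one-liner.

```javascript
// Hypothetical sketch of what a gpt(url, prompt, key) helper could prepare.
// The real script in the thread may differ; this only builds the request.
function buildGptRequest(url, prompt, key) {
  return {
    url: url,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + key // API key goes in the auth header
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo", // assumed model name
        messages: [{ role: "user", content: prompt }]
      })
    }
  };
}
```

The actual network call (and extracting the completion text from the response) would then be done with whatever HTTP client the scripting environment provides.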
----- automatically generated at Thu Apr 13 2023 09:56:13 GMT+0200 (CEST) using gpt-4 -----
The architecture model GPT contains a view called "Default View" in the folder Views/. This view consists of various components that interact with each other to form a system. The main components are business-interactions, business-actors, data-objects, and application-processes.

In this system, there is a business-interaction called "Question & Answer" which triggers a business-actor named "Simulated Intelligence". This interaction also accesses several data-objects such as "Final Query template", "Documents indexed by Vectors", "Question", "Answer", "History Question Template", and "Dialog History".

There are two business-actors involved: the User and the Simulated Intelligence. The User asks a Question (data-object) and receives an Answer (data-object). The Simulated Intelligence is responsible for processing the question and generating an answer.

The application-processes in this system include:
1. Receive Question: accesses the Question data-object and triggers the Rephrase to standalone Question process.
2. Rephrase to standalone Question: accesses the Dialog History and History Question Template data-objects, rephrases the question to a standalone one using the LLM application-service, and triggers Compute Embedding.
3. Compute Embedding: gets a Vector from the LLM (embeddings) application-service and triggers Find top k nearest documents.
4. Find top k nearest documents: accesses the Documents indexed by Vectors data-object and triggers Formulate Query.
5. Formulate Query: accesses the Final Query template data-object, gets a Completion from the LLM application-service, and triggers Return Result.
6. Return Result: accesses the Answer data-object.

Additionally, the Dialog History data-object contains both Questions and Answers.

In conclusion, this system's purpose is to facilitate a question-and-answer interaction between a user and a simulated intelligence using various processes: rephrasing questions, computing embeddings, finding relevant documents based on vectors, formulating queries based on templates, and returning results in the form of answers.
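The six application-processes form a classic retrieval-augmented Q&A loop, and the flow can be sketched as plain code. All names below are illustrative assumptions (the model describes processes, not an implementation); the LLM services and vector search are injected as functions so the wiring between the steps is visible.

```javascript
// Sketch of the Q&A flow described above. Function names are hypothetical;
// rephrase/embed/findTopK/complete stand in for the LLM and vector services.
function answerQuestion(question, deps) {
  const { history, rephrase, embed, findTopK, complete } = deps;
  // 1. Receive Question, 2. Rephrase to standalone Question (uses Dialog History)
  const standalone = rephrase(question, history);
  // 3. Compute Embedding via the LLM (embeddings) service
  const vector = embed(standalone);
  // 4. Find top k nearest documents in the vector index
  const docs = findTopK(vector, 3);
  // 5. Formulate Query from a template, get a Completion from the LLM
  const prompt = "Context:\n" + docs.join("\n") + "\n\nQuestion: " + standalone;
  const answer = complete(prompt);
  // 6. Return Result, and record the exchange in Dialog History
  history.push({ question: question, answer: answer });
  return answer;
}
```

Because Dialog History feeds back into the rephrasing step, follow-up questions can be resolved into standalone ones before the embedding is computed.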
Quote from: cybye on April 14, 2023, 07:22:34 AMps. this is one of the great advantages of using a language like ArchiMate - it is translatable.
Quote from: cybye on April 14, 2023, 07:22:34 AMMaybe it's getting time to start working on the other way around?
Quote from: Jean-Baptiste Sarrodie on April 14, 2023, 09:57:45 AMPhil and I did present this idea during an Open Group event in 2017, we then imagined the kind of discussion one could have with a virtual architect to help in day to day work.
Quote from: Jean-Baptiste Sarrodie on April 14, 2023, 09:57:45 AM
- It would be interesting to provide more context to GPT (e.g. the chosen viewpoint and its textual description, the view's name, the model's name and documentation, and the list of elements which are not shown in the view but exist in the model and have relationships with the view's elements). I guess the resulting text would be even more impressive.
- Though there are some limitations to the size of the prompt that can be sent to GPT, it should be possible to give it a big part of the model so that it generates a good summary of the model itself. This summary could be saved somewhere and added to the prompt each time you ask something else, making sure GPT knows the overall context of the model the view sits in.
Quote from: Jean-Baptiste Sarrodie on April 14, 2023, 09:57:45 AM- We might be able to use GPT to find duplicates which have different names by calling it one time per element type with the list of element's name and documentation.
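The per-type duplicate check suggested in the quote above could be sketched as follows. This is an assumed illustration, not an existing script: it groups elements by type and builds one prompt per type from the element names and documentation, which would then be sent to GPT one call per type.

```javascript
// Hypothetical sketch: one duplicate-detection prompt per element type.
// The prompt wording and element shape ({ type, name, documentation }) are assumptions.
function buildDuplicatePrompts(elements) {
  const byType = {};
  for (const el of elements) {
    (byType[el.type] = byType[el.type] || []).push(el);
  }
  const prompts = {};
  for (const type in byType) {
    const lines = byType[type]
      .map(el => "- " + el.name + ": " + (el.documentation || ""))
      .join("\n");
    prompts[type] =
      "Which of the following " + type +
      " elements appear to be duplicates of each other?\n" + lines;
  }
  return prompts;
}
```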
Quote from: Jean-Baptiste Sarrodie on April 14, 2023, 09:57:45 AM
- I guess GPT should be able to generate a simplified view based on a more detailed one.
- GPT is usually able to quickly provide a good decomposition of most "industry standard" value chains. For example, I asked it to give me a list of the main steps of the "S2P" value chain, and it recognized "Source to Pay" and provided a very good description of it. I'm sure we could use this ability to create a "modelling copilot" (in Archi) with which one would chat, and at some point generate content from answers.