Archi Forum

Archi => General Archi Discussion => Topic started by: brain_work on February 28, 2017, 08:37:38 AM

Title: Loading a huge model versus creating a new model and import csv files
Post by: brain_work on February 28, 2017, 08:37:38 AM
Hi,

I noticed that there is a great difference in loading times between loading a large existing model (an .archimate file) and creating a new model and importing CSV files with the same data. Loading the .archimate file takes 15 minutes, while loading the CSV files into an empty model takes about half that time (slightly more than 7 minutes).

The model I use to test has:

So .csv files seem more appropriate when you want to share just the elements and relationships of large models.

BTW I'm curious how much time the new v2 database plugin will take to load the same data from a database.

Regards,
Peter

Title: Re: Loading a huge model versus creating a new model and import csv files
Post by: Hervé on February 28, 2017, 09:35:08 AM
Hi Peter,

I just did some tests on an empty SQLite database on an i5-3320M laptop (2 HT cores @ 2.6 GHz) with 3 GB of RAM:

my plugin is much quicker but requires a lot more RAM. I believe I also have a memory leak, because the heap memory is not released after my import process. Still a lot of work to do  ;)

Best regards
Hervé
Title: Re: Loading a huge model versus creating a new model and import csv files
Post by: Phil Beauvoir on February 28, 2017, 12:43:03 PM
Hi,

Archi was never designed for huge models. The slow loading and memory bottleneck are caused by the Eclipse Modelling Framework (EMF) loading the whole XML file into memory. In order to truly support faster, "lazy" loading, the whole persistence mechanism needs to be re-worked to use "proxy" objects and, basically, to not load the whole lot into memory. The model tree would also need to be re-done to support lazy loading. So until this is re-worked, these larger models will take some time to load.
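[The proxy approach Phil describes can be sketched generically. The following is an illustrative Python sketch, not Archi's or EMF's actual code: eager loading materialises every element up front, while lazy loading hands out cheap proxies that only resolve from the backing store on first access.]

```python
# Illustrative sketch (not Archi/EMF code): eager loading vs. lazy
# "proxy" loading. The Store counts how many real loads happen.

class Element:
    def __init__(self, element_id, name):
        self.element_id = element_id
        self.name = name

class ElementProxy:
    """Stands in for an Element; resolves from the store on first use."""
    def __init__(self, element_id, store):
        self.element_id = element_id
        self._store = store
        self._resolved = None

    def resolve(self):
        if self._resolved is None:        # load only on first access
            self._resolved = self._store.load(self.element_id)
        return self._resolved

class Store:
    """Pretend backing file/database; counts actual loads."""
    def __init__(self, records):
        self.records = records
        self.loads = 0

    def load(self, element_id):
        self.loads += 1
        return Element(element_id, self.records[element_id])

records = {i: f"element-{i}" for i in range(10_000)}
store = Store(records)

# Eager: every record becomes a full object immediately (10,000 loads).
eager = [store.load(i) for i in records]

# Lazy: only lightweight proxies are created; nothing is loaded yet.
store.loads = 0
lazy = [ElementProxy(i, store) for i in records]
first = lazy[0].resolve()                 # triggers exactly one load
print(store.loads)                        # -> 1
```

[With eager loading the cost is paid entirely at open time, which matches the 15-minute load Peter reports; with proxies the cost is spread over actual accesses.]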

Phil
Title: Re: Loading a huge model versus creating a new model and import csv files
Post by: Hervé on February 28, 2017, 12:54:53 PM
Hi Phil,

Nevertheless, I do not know whether the whole XML file is loaded into memory, but when I use a profiler to compare the native file load with my import plugin, the native file load requires much less memory than my plugin does.

Peter's model (at least the extract that he kindly shared with me) requires only 128 MB of heap when loaded with the standard file-load procedure, while it requires 512 MB of heap when imported with my plugin.

This shows that I have a lot of room for improvement  ;D

Best regards
Hervé
Title: Re: Loading a huge model versus creating a new model and import csv files
Post by: brain_work on February 28, 2017, 14:52:35 PM
I don't think I ever shared an extract; that was another Peter, who also worked with a large model (even larger than mine, I think :-) )

Archi behaves very well with a large model, with the exception of the search feature, which starts searching too soon (after you type a single letter). The visualiser works very well on my machine if I keep the Depth at 3 or lower. Specialised graph visualisation tools like Gephi, Cytoscape or NodeXL perform much, much worse. And I do everything on a laptop with 2.5 GB of available memory and an i5-6300U processor @ 2.40 GHz.

Importing my model with the DB plugin took between 10 and 11 minutes on one try and between 5 and 6 minutes on another. I think it has to do with memory management, because on the faster try Archi used less memory.

Regards,
Peter
Title: Re: Loading a huge model versus creating a new model and import csv files
Post by: Hervé on February 28, 2017, 19:25:00 PM
Indeed, I was talking about another Peter. I should have used his username instead: prgee.
Title: Re: Loading a huge model versus creating a new model and import csv files
Post by: Johan2 on March 21, 2017, 12:55:58 PM
Great thread!

Maybe the new search functionality could only start searching at the third character... then typing in the search box would no longer be a bother.
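[A minimal sketch of that idea, with hypothetical names rather than Archi's actual search code: gate the expensive model-wide search on a minimum query length, here assumed to be the 3 characters suggested above.]

```python
MIN_QUERY_LENGTH = 3  # assumed threshold, per the suggestion above

def should_search(query: str) -> bool:
    """Only fire the (expensive) model-wide search once the query
    reaches MIN_QUERY_LENGTH characters."""
    return len(query.strip()) >= MIN_QUERY_LENGTH

def filter_elements(elements, query):
    """Case-insensitive substring match over element names."""
    if not should_search(query):
        return []            # too short: don't search yet
    q = query.lower()
    return [e for e in elements if q in e.lower()]

elements = ["Business Actor", "Business Role", "Application Component"]
print(filter_elements(elements, "bu"))   # -> [] (below threshold)
print(filter_elements(elements, "bus"))  # -> ['Business Actor', 'Business Role']
```

[In a real UI this would usually be combined with a short debounce delay so the search also waits for the user to pause typing.]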