ServiceNow plugin suggestion on relationship parsing

Started by Manj75, September 23, 2019, 14:47:55


Manj75

Hi Herve,

In relation to the issues already raised, and having tried the latest v1.7, I wanted to put forward a suggestion to improve the import of large data sets.  The relationships data set is expected to be the largest, bigger even than any of the CI tables, and having to retrieve it in a single REST request may well be inefficient and hide problem areas.

I noticed that in v1.7 the generated URL has a built-in query filter to fetch relationships whose type sys_id is IN the given list.  My suggestion: instead of a single URL request for all relationship types, could you iterate over the types specified in the ini file and send one REST request per type, so that each JSON response is parsed and added to the Archi model in the context of that type alone?  I see this as much more efficient, since each JSON response will be smaller, and if a particular failure is due to a specific relationship type or the size of a response, it will be clearly visible in the log file, as progress will have been shown up to that type's request/import.

Hopefully, you can follow what I mean here, but here is some pseudo description:

for N iterator through relationship types specified in ini file
{
    Generate URL request for N type
    MyConnection send the request and receive JSON response
    Parse the response as usual and create Archi elements
    Move on to next type in list
}
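A minimal sketch of the idea in Java (the plugin's language). This is not the plugin's actual code; the class and method names are hypothetical, and it assumes the standard ServiceNow Table API endpoint for the `cmdb_rel_ci` table with a `sysparm_query` filter on the `type` field:

```java
import java.util.ArrayList;
import java.util.List;

public class RelationshipUrlBuilder {
    // Base URL of the ServiceNow instance (hypothetical placeholder)
    private final String instanceUrl;

    public RelationshipUrlBuilder(String instanceUrl) {
        this.instanceUrl = instanceUrl;
    }

    // Build one Table API request URL per relationship type sys_id,
    // instead of a single request filtering on all types at once.
    // Each URL can then be sent and its (smaller) JSON response parsed
    // before moving on to the next type.
    public List<String> buildPerTypeUrls(List<String> typeSysIds) {
        List<String> urls = new ArrayList<>();
        for (String sysId : typeSysIds) {
            urls.add(instanceUrl
                    + "/api/now/table/cmdb_rel_ci"
                    + "?sysparm_query=type=" + sysId);
        }
        return urls;
    }

    public static void main(String[] args) {
        RelationshipUrlBuilder b =
                new RelationshipUrlBuilder("https://example.service-now.com");
        // One URL per relationship type read from the ini file
        for (String url : b.buildPerTypeUrls(List.of("sysid1", "sysid2"))) {
            System.out.println(url);
        }
    }
}
```

Sending each request separately also means the log can record which type is currently being imported, so a failure points directly at the offending type.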

Hervé

Hi,

That should be quick to implement. This will lead to smaller JSON responses and hopefully fix your issue. Could you please open an issue on GitHub and I will do it.

Best regards
Hervé

Manj75

I've raised an issue on GitHub.  This feature will be really beneficial and I look forward to its release, but unfortunately it won't resolve my main issue, because it turns out that a single relationship type and its volume is causing the exception (please see my other thread update to keep it in the same thread).

Thanks,
Manjit