
Please see the message that NEREN received from Paragon support about your queries against the RETS Server.

Please acknowledge receipt of this email and confirm that the queries will be optimized.

“We are seeing some heavy querying from a RETS vendor in NEREN:

LoginName: gar0608

I’ve attached a snippet of what they are currently querying. We have two concerns. The first is that each listing ID is being requested twice, so if this is a scheduled task, it may not be set up correctly. We’re seeing the same IP and session information in the last column of the attachment, so both requests are being executed simultaneously. Also, a single listing is being requested at a time; we ask that they batch multiple listings into one query.

Our second concern is that there are roughly 100 requests per minute, which is causing our utilization to increase. We have not disabled the vendor yet, but if the excessive querying continues, we will have to look into disabling the account until the vendor is contacted. Please let me know if there are any questions. Also, here is our guide if they need help with constructing queries: http://paragonconnect.paragonrels.com/rets/rets-best-practice-guide”

I received the message above and am looking into the potential issue, but I need someone to confirm that the changes I make are having a positive effect.

http://neren.rets.paragonrels.com/ret... gar0608

Thank you in advance, Sean

Sean Garaux (asked 2018-06-15 09:23:37 -0500)
updated by bwolven (2018-06-15 09:25:33 -0500)


1 Answer


It is looking much better than it was before, but it still isn't ideal. The best approach would be to run your date query using the Offset and Limit functionality and issue only one query per class. You would only need more than one query per class if there are more records than the limit allows; in that case you use Offset and Limit to get the next batch. The current record limit for your profile is 2500 records per search transaction.

For example: Resource=Property Class=RE_1 Query=(L_UpdateDate=2018-06-15T07:48:49+) Offset=1 Limit=2500 Select=*

If the returned Count is greater than the number of records returned, update the offset and search again: new Offset = previous Offset + actual number of records returned. Records are returned in key field order.

Doing it this way would be much more efficient than manual batching by listing ID values, and it would require fewer search requests.
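A minimal sketch of this Offset/Limit loop in Python, using raw RETS 1.x search transactions over HTTP. The search URL and credentials are placeholders (a real client takes the capability URLs from the Login response), and the COMPACT response is parsed with regexes only for brevity:

    import re
    import requests
    from requests.auth import HTTPDigestAuth

    SEARCH_URL = "http://neren.rets.paragonrels.com/rets/search"  # placeholder path
    AUTH = HTTPDigestAuth("gar0608", "your-password")             # placeholder

    def paged_search(http, resource, rets_class, query, limit=2500):
        """Pull every record matching `query` for one class, advancing
        Offset until the server-reported Count is exhausted."""
        offset, rows = 1, []  # RETS offsets are 1-based
        while True:
            resp = http.get(SEARCH_URL, auth=AUTH, params={
                "SearchType": resource,
                "Class": rets_class,
                "QueryType": "DMQL2",
                "Query": query,
                "Format": "COMPACT-DECODED",
                "Count": "1",            # ask the server for the total count
                "Limit": str(limit),
                "Offset": str(offset),
                "Select": "*",
            })
            resp.raise_for_status()
            total = int(re.search(r'<COUNT\s+Records="(\d+)"', resp.text).group(1))
            batch = re.findall(r"<DATA>.*?</DATA>", resp.text, re.S)
            rows.extend(batch)
            offset += len(batch)  # new Offset = previous Offset + records returned
            if not batch or offset > total:
                break
        return rows

    # One query per class; Login/Logout transactions omitted here.
    with requests.Session() as http:
        listings = paged_search(http, "Property", "RE_1",
                                "(L_UpdateDate=2018-06-15T07:48:49+)")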

bwolven (answered 2018-06-15 10:21:21 -0500)

Comments

I believe the query you are referring to is a data query that runs every 15 minutes and is based on the last timestamp that it ran. This is how we run our incrementals throughout the day. I am not sure why we would need an offset, as there were probably not 2500 changes in a 15-20 minute time frame.
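For illustration, that incremental pattern might be sketched like this in Python, where load_last_run and save_last_run are hypothetical helpers around wherever the timestamp is actually stored:

    from datetime import datetime, timezone

    def load_last_run() -> str:
        # Hypothetical: read the stored timestamp from a file or database.
        return "2018-06-15T10:30:00"

    def save_last_run(ts: str) -> None:
        # Hypothetical: persist the timestamp for the next run.
        pass

    # Capture the start time before searching so records modified
    # while the pull is running are not skipped next time.
    started = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
    query = "(L_UpdateDate=" + load_last_run() + "+)"  # everything modified since last run
    # ... run the incremental search with `query`, then only on success:
    save_last_run(started)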

Sean Garaux (2018-06-15 10:51:12 -0500)

You may not need to use it. But if you return the data fields in your incremental query itself, you wouldn't need the corresponding batched pulls you do after it.

bwolven (2018-06-15 10:54:27 -0500)

Just to add to this: currently I am running a full data download and have made sure that the process is pulling 2000 listings at a time. RE_1 has just over 18,000 listings; our process pulls all IDs for the 18,000 and then pulls batches of 2000 at a time. I would be surprised if this is putting undue stress on the RETS server, but please confirm, as I am testing this currently.

Sean Garaux (2018-06-15 10:55:15 -0500)

It doesn't seem to be too bad. But I would suggest that if you do full pulls, you do them outside of normal business hours and stick to incremental pulls during the day. Also, I noticed that you are pulling a lot of data for open house searches: instead of pulling them by the open house key field, you are pulling by listing key, which returns the OH records for the listing.

bwolven (2018-06-15 11:27:36 -0500)

It also looks like you are doing a second search attempt with the same query if no records are found on the first one. For BF_6, for example, I see two queries in a row with: (L_StatusCatID=|1,3),(L_Last_Photo_updt=2018-06-15T16:11:01Z+),(L_PictureCount=0+)

bwolven (2018-06-15 11:40:03 -0500)

Can you send me this so I can see it? The way the process is written, it should only send that query once, to obtain the listing IDs with photos that have been modified after that timestamp. Please send it to sgaraux@deltagroup.com. Thank you.

Sean Garaux (2018-06-15 12:14:43 -0500)

Full pulls only happen in the off hours. The only reason I was attempting a full pull was for the testing I was doing.

Sean Garaux (2018-06-15 12:15:39 -0500)

I also have a guard on the photo process that will not allow one run to start on top of another photo process that is already running. So if our incremental photo process is currently running and another incremental photo process tries to kick off, it will not run.
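A minimal sketch of one way such a non-overlap guard can work in Python, assuming a POSIX system and an illustrative lock-file path (the actual process presumably implements this its own way):

    import fcntl
    import sys

    LOCK_PATH = "/tmp/photo_incremental.lock"  # illustrative lock-file location

    def run_photo_incremental():
        pass  # placeholder for the actual photo pull

    lock = open(LOCK_PATH, "w")
    try:
        # Non-blocking exclusive lock: fails if another run already holds it.
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("Photo process already running; skipping this run.")
    try:
        run_photo_incremental()
    finally:
        fcntl.flock(lock, fcntl.LOCK_UN)
        lock.close()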

Sean Garaux (2018-06-15 12:17:20 -0500)

I see what you mean about the open house data. I made modifications to use the OH_UniqueID, and I also saw that someone here was doing some testing and pulling older open house data that we don't need. I have made adjustments to that as well, so the open house pull should look considerably better.

Sean Garaux (2018-06-15 12:31:15 -0500)

Can you also make sure to call Logout on the session when you are finished with your pulls? It helps keep our connections from getting cluttered and frees up some system resources. Sessions do eventually get closed out after a couple of hours, but it is better to close them when you are finished using them.
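A sketch of wrapping a pull in Login/Logout with Python and requests, so the Logout transaction runs even if a search fails partway through; the URLs and credentials are placeholders, and a real client should take the capability URLs from the Login response:

    import requests
    from requests.auth import HTTPDigestAuth

    LOGIN_URL = "http://neren.rets.paragonrels.com/rets/login"    # placeholder
    LOGOUT_URL = "http://neren.rets.paragonrels.com/rets/logout"  # placeholder
    AUTH = HTTPDigestAuth("gar0608", "your-password")             # placeholder

    http = requests.Session()
    http.headers["RETS-Version"] = "RETS/1.7.2"
    http.get(LOGIN_URL, auth=AUTH).raise_for_status()
    try:
        pass  # run the incremental searches here
    finally:
        # Always end the session so the server can release it right away
        # instead of waiting hours for it to time out.
        http.get(LOGOUT_URL, auth=AUTH)
        http.close()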

bwolven (2018-06-15 13:56:22 -0500)

I saw the open house request come through twice (duplicated) for each class. Here is an example: Resource: OpenHouse, Type: RE_1, Request: (OH_StartDate=2018-06-15+), Select: OH_UniqueID. But I didn't see any requests pulling the OH data by the IDs that were returned.

bwolven (2018-06-15 14:12:04 -0500)

Let me check it.

Sean Garaux (2018-06-15 14:15:14 -0500)

Open house data should look better.

Sean Garaux (2018-06-15 14:48:02 -0500)

Open house searches look much better now. Did you look into adding Logout?

bwolven (2018-06-15 14:50:43 -0500)

I will look at that. Does everything else seem to be within reason now?

Sean Garaux (2018-06-15 15:42:27 -0500)

Yes, it does seem to be looking much better now.

bwolven (2018-06-15 15:44:34 -0500)

Okay, thank you. I will check on the Logout portion of the code and make sure it is working so it is not leaving sessions open.

Sean Garaux (2018-06-15 15:57:06 -0500)

Were you able to check into calling Logout? I'm still not seeing any Logout transactions in the RETS logs.

bwolven (2018-06-18 14:37:31 -0500)

Still not seeing any Logout calls in your requests. If by chance you were using PHRETS, the Logout function is called Disconnect().

bwolven (2018-06-26 11:03:56 -0500)

I see the Logout calls now. Thank you.

bwolven (2018-06-26 13:36:44 -0500)
