Redis Cache Timeout on Local Machines

I used to encounter this problem once in a while, but now I get it all the time, so it's time to fix it.  Here is my error.

Timeout performing GET notification-43fc9640-2487-46d6-b56a-18f5f8a31d81, inst: 1, queue: 2, qu: 0, qs: 2, qc: 0, wr: 0, wq: 0, in: 0, ar: 0, clientName: XX-DELL1, IOCP: (Busy=0,Free=1000,Min=4,Max=1000), WORKER: (Busy=1,Free=4094,Min=4,Max=4095), Local-CPU: 100% (Please take a look at this article for some common client-side issues that can cause timeouts: https://github.com/StackExchange/StackExchange.Redis/tree/master/Docs/Timeouts.md)

I have now learned to read error messages carefully and precisely.  So I followed the link.

Are you getting network or CPU bound?  Yes, I think so.

It said:

Verify what’s the maximum bandwidth supported on your client and on the server where redis-server is hosted. If there are requests that are getting bound by bandwidth, it will take longer for them to complete and thereby can cause timeouts. Similarly, verify you are not getting CPU bound on client or on the server box which would cause requests to be waiting for CPU time and thereby have timeouts.
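While investigating, one stopgap (assuming StackExchange.Redis, which produced the error above) is to raise the client timeouts in the connection string so that a transient CPU or bandwidth spike doesn't immediately surface as an exception. The host name below is a placeholder; `syncTimeout` and `connectTimeout` are real StackExchange.Redis configuration options, given in milliseconds:

```
mycache.redis.cache.windows.net:6380,password=...,ssl=True,abortConnect=False,syncTimeout=5000,connectTimeout=10000
```

This only buys headroom; it doesn't fix whatever is consuming the CPU or bandwidth in the first place.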

OK, so does my local machine (a Dell laptop) not have enough CPU power or network bandwidth?

[Screenshots: resource usage on the local machine]

It looks like I might not have enough bandwidth? Are there commands taking a long time to process?

I have about 300 keys, not even close to putting any weight on Redis, so I am crossing this out.

Was there a big request preceding several small requests to Redis that timed out?  “qs” tells how many requests have been sent from the client to the server but are still awaiting a response.

inst: 1, queue: 2, qu: 0, qs: 2, qc: 0, wr: 0, wq: 0, in: 0, ar: 0

qs is 2.  That number alone doesn’t tell much, but I always see 2, i.e. it is not increasing, so it can hardly be doing any damage to the service.  I am crossing this out too.
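As an aside, the name/value pairs in the timeout message can be pulled apart programmatically, which makes it easier to compare several occurrences of the error. A minimal sketch in Python (the field names come from the error above; the parsing helper is my own, not part of any library):

```python
# Parse the comma-separated "name: value" stats from a
# StackExchange.Redis timeout message into a dictionary.
def parse_timeout_stats(message):
    stats = {}
    for part in message.split(","):
        if ":" in part:
            name, _, value = part.partition(":")
            stats[name.strip()] = value.strip()
    return stats

stats = parse_timeout_stats(
    "inst: 1, queue: 2, qu: 0, qs: 2, qc: 0, wr: 0, wq: 0, in: 0, ar: 0"
)
print(stats["qs"])  # requests sent to the server, still awaiting a reply
```

Logging these values over time would show whether qs really stays flat or creeps up before a timeout.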

Are you seeing high number of busyio or busyworker threads in the timeout exception?

Well, let’s see.

IOCP: (Busy=0,Free=1000,Min=4,Max=1000), WORKER: (Busy=1,Free=4094,Min=4,Max=4095), Local-CPU: 78.48%

It said that when the number of Busy threads is greater than Min threads, you are likely paying a 500 ms delay before network traffic is processed by the application.

I get 1 busy and 4 min, so I am not getting a delay here.
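To make that heuristic concrete: once Busy exceeds Min, the CLR thread pool injects roughly one new thread per 500 ms, so the expected extra wait grows with the gap. A rough back-of-the-envelope sketch (the formula is my own reading of the guidance, not an exact model of the thread pool):

```python
# Rough estimate of thread-pool ramp-up delay: once Busy > Min,
# the CLR injects about one new thread per 500 ms.
def estimated_delay_ms(busy, minimum):
    return max(0, busy - minimum) * 500

print(estimated_delay_ms(1, 4))   # my case: busy is below min, so no delay
print(estimated_delay_ms(20, 4))  # a starved pool waits far longer
```

This is why the usual fix for this symptom in .NET is to raise the minimum thread count (ThreadPool.SetMinThreads), but in my case busy never exceeds min, so that fix doesn't apply.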

This only happens on my local machine, which is a long way from where Azure sits.  This may be it? The recommendation is that the Redis cache and its clients sit in the same region.

OK, this isn’t solving my problem.  After more searching, I came to https://gist.github.com/JonCole/db0e90bedeb3fc4823c2.

Different sized client machines have limitations on how much network bandwidth they have available. If the client exceeds the available bandwidth, then data will not be processed on the client side as quickly as the server is sending it. This can lead to timeouts.

And the recommended solution basically says to minimize the use of network bandwidth.  Not exactly what I wanted to hear, but let’s try it.

I closed most of my Chrome windows, closed Skype and Skype for Business, and anything else that was eating up network bandwidth.

Um… 🙁 not solving the problem at all.

So far, my suspicion is that I am calling Redis at a bad time.  The timeout happens when I access the Redis cache while the shared parent view frame is being rendered, and all other cache access works OK.

Conflicts between different versions of the same dependent assembly

Warning Found conflicts between different versions of the same dependent assembly that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.

Every day I still (and probably always will) encounter a new error that I have never seen before.  This is today’s.

OK… picking out keywords from the warning message: assembly, different versions, conflict.  I wonder which assemblies.

Oh well, let’s go to NuGet and make sure everything is updated.   I saw that jQuery and Newtonsoft.Json needed to be updated, so I updated them.
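For reference, the usual mechanism behind this kind of fix is an assembly binding redirect in app.config or web.config, which NuGet updates regenerate. A typical redirect for Newtonsoft.Json looks roughly like the sketch below; the version numbers are illustrative, not the exact ones from my project:

```
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect all older versions to the single version actually deployed. -->
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-9.0.0.0" newVersion="9.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

If updating packages doesn't clear the warning, setting build log verbosity to detailed (as the warning suggests) shows exactly which assemblies and versions conflict.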

And fixed.