Add rpcinfo rpc to debug deadlocks
We seem to have deadlock bugs in our RPC system, most likely inherited from ZEC or BTC. Since some Hush RPCs take longer (such as anything with Sietch protections), the deadlocks are more likely to occur. Eventually all RPC slots are used up and no more RPC commands can be sent to hushd. This is why the "plz_stop" feature was implemented, but that is just a workaround to restart the server; we must find and fix the root cause. This RPC will let us see when we are getting close to our maximum work queue depth and should help us learn exactly what is happening.
@@ -157,6 +157,11 @@ public:
         boost::unique_lock<boost::mutex> lock(cs);
         return queue.size();
     }
+    size_t MaxDepth()
+    {
+        boost::unique_lock<boost::mutex> lock(cs);
+        return maxDepth;
+    }
 };
 
 struct HTTPPathHandler
@@ -186,6 +191,16 @@ std::vector<HTTPPathHandler> pathHandlers;
 //! Bound listening sockets
 std::vector<evhttp_bound_socket *> boundSockets;
 
+
+int getWorkQueueDepth()
+{
+    return workQueue->Depth();
+}
+int getWorkQueueMaxDepth()
+{
+    return workQueue->MaxDepth();
+}
+
 /** Check if a network address is allowed to access the HTTP server */
 static bool ClientAllowed(const CNetAddr& netaddr)
 {