You're definitely trying to abuse a feature of TCP here in order to fit a square peg into a round hole. Your requirements make little sense.
1) You don't care about losing data from the sender (not sending it is also losing it), yet you insist on using a reliable transport protocol (TCP/IP).
2) The client should control what the server does, but instead of explicitly telling the server, you rely on the client's full receive buffer propagating back-pressure to the server and hope that this will work.
For 1), UDP is definitely the better fit. For 2), the buffering in TCP/IP is neither intended nor reliable for this purpose. TCP buffering is designed so that data can never be lost in transit without generating an error on at least one side of the connection. Its design is in fact pretty much orthogonal to your requirement of using it as a throttling mechanism.
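If sporadically dropped updates are genuinely acceptable, requirement 1) maps naturally onto UDP: the sender fires datagrams and simply does not care whether they arrive. A minimal Python sketch, where the address, port and payload are placeholders rather than anything from your setup:

```python
import socket
import time

# Fire-and-forget periodic updates over UDP. If the receiver is slow,
# absent, or its buffer is full, datagrams are silently dropped -- which
# is exactly the behaviour requirement 1) asks for.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(10):                                   # bounded for the sketch
    sock.sendto(b"status-update", ("192.0.2.10", 5005))  # may be dropped silently
    time.sleep(1.0)
sock.close()
```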
While you could set the buffer sizes to sort of make it behave the way you want, by allowing room for only one message on both the client and the server side, this is a pretty bad idea in general. First, you would still end up with at least two messages buffered: one stored in the client's socket buffer and one in the server's socket buffer. Allocating only half a message as the buffer size, so that only one full message fits in total, would likely not work at all and would constantly generate errors.
But it gets worse: a socket implementation is not required to honor your requested buffer size exactly. It is required to guarantee that a message up to the buffer size cannot be corrupted or spuriously lost due to a buffer overflow, but it is absolutely free to reserve a bigger buffer than you specify, for performance reasons or simply by rounding the size up to the next power of two. This approach also requires your client to know the message length in advance and limits your protocol to working only when every transmission is exactly that size. And believe me: at some point in the future you will change that message length on the server side and forget to make the corresponding change on the client.
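You can see the first problem for yourself by asking the OS for a small buffer and reading back what it actually reserved. A minimal Python sketch; the exact rounding is OS-specific (Linux, for example, doubles the requested value to leave room for bookkeeping overhead, as documented in socket(7)):

```python
import socket

# Request a tiny receive buffer and check what the kernel actually gave us.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024)      # ask for ~1 KiB
actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)   # what we really got
print(f"requested 1024 bytes, kernel reports {actual} bytes")
s.close()
```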
Sit down and think about your intended implementation. It may seem like more work to implement an explicit client-to-server message that tells the server to start or stop sending periodic updates (a single command with the interval as a parameter would already be enough; an interval of -1 could then mean "stop sending data"), but this is a much more reliable and future-proof implementation than what you describe; see the sketch below.
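Here is a minimal sketch of such a control message in Python, assuming a hypothetical length-prefixed JSON framing; the command name, framing and helper functions are illustrative, not an existing protocol:

```python
import json
import socket
import struct

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def send_command(sock: socket.socket, interval: float) -> None:
    """Client side: ask the server to send updates every `interval` seconds.
    An interval of -1 means: stop sending."""
    body = json.dumps({"cmd": "set_interval", "interval": interval}).encode()
    sock.sendall(struct.pack("!I", len(body)) + body)

def recv_command(sock: socket.socket) -> dict:
    """Server side: read one length-prefixed command."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length))

# Client usage: start updates every 2 seconds, later stop them.
# send_command(client_sock, 2.0)
# send_command(client_sock, -1)
```

With an explicit command like this, the server's behaviour no longer depends on how any particular TCP stack sizes or flushes its buffers, and changing the update payload later does not break the control path.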
Jumping through hoops in order to fit square pegs into round holes is never a solution.