gb119 Posted February 24, 2019 (edited)

For a while I've been tinkering with the idea of building a LabVIEW client that could talk to Jupyter kernels for interfacing with Python, having previously been a user of RolfK's OpenG LabPython package. Although LabPython, and now the native LabVIEW 2018 Python support, have many uses (and indeed I use them in my 'production' code), there are a few things that a Jupyter kernel client can do that they can't:

- Not be tied to particular versions of Python - LabPython got stuck for me around 2.7.10 and I think was fussy about which compiler had been used. The 2018 native support is restricted to 2.7 or 3.6 I believe (3.7 definitely doesn't work).
- Not be tied to the same 32/64 bitness as LabVIEW.
- Be able to offload the Python to a remote server, or go cross platform.

I haven't investigated the Enthought package (too much hassle to get a new vendor set up on my University's purchasing system and not really able to justify spending tax payers' money on playing!) which I suspect might be doing something similar.

Anyway, the attached zip file is a proof of concept - it includes a test VI that will try to find an ipython executable, fire it up, and let you interact with it. There are lots of things not properly tested and probably a slew of bugs as well. To run it you need several dependencies:

- OpenG Toolkit libraries, particularly the LabVIEW Data, string, error and array libraries
- The JKI JSON library - I had to pick a JSON serialiser and the JKI one seemed as good as any and better than some...
- The JSONText JSON serialiser library, available via VIPM
- The ZeroMQ LabVIEW bindings - libzmq is the underlying network transport used in Jupyter and there is an excellent LabVIEW bindings library for it.
- The attached SHA256 implementation, so that the communications messages are properly HMAC signed.
- LabVIEW 2018 - sorry, I'm only writing in 2018 now and this code uses malleable VIs with type specialization and asserts in use - so it may not be easy to backport.

There are a few things that I'd still like to figure out - primarily, the client protocol is very much focussed (reasonably enough) around the idea that the client is sending strings and is interested in string representations of data. I'd like to figure out an efficient way to transfer largish LabVIEW data structures backwards and forwards. I think this probably means developing a custom message handler, registering it with the kernel when the code starts, and writing some Python 'flatten to string' and 'unflatten from string' code - but that's only this week's concept....

If you use it, please note that this is probably only alpha quality at best - it may or may not work for you, it may not be safe to use, and if it causes any loss or damage or eats your cat then it's not my fault....

Edit 6th March 2019: I've switched the JSON parser to JSONText, found and fixed a few bugs, and managed to build a VI package for it that should have the correct dependencies and installs the example client in the LabVIEW example finder.

Edit 19th April 2019: Added more options to connect the example client to remote and already running kernels (and not to shut them down on exit!). Some other fixes as well.

Edit 11th April 2020: Updated the SHA256 version to one that can correctly hash files without reading the whole thing into memory.

university_of_leeds_lib_jupyter_client-1.1.0.6.vip
university_of_leeds_lib_sha256-1.1.2.7.vip

Edited April 11, 2020 by gb119
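For anyone wondering what "properly HMAC signed" means above: the Jupyter messaging spec signs every message with HMAC-SHA256 over the four JSON frames, using the key from the kernel's connection file. A minimal Python sketch of that scheme (this is what the attached SHA256 library has to reproduce on the LabVIEW side; it is not code from the package itself):

    import hashlib
    import hmac
    import json

    def sign_message(key, header, parent_header, metadata, content):
        # The signature is HMAC-SHA256 over the four JSON-serialised message frames,
        # using the 'key' from the kernel's connection file, sent as a hex digest.
        mac = hmac.new(key.encode('utf-8'), digestmod=hashlib.sha256)
        for part in (header, parent_header, metadata, content):
            mac.update(json.dumps(part).encode('utf-8'))
        return mac.hexdigest()

    # The multipart ZeroMQ message is then:
    #   [identities..., b'<IDS|MSG>', signature, header, parent_header, metadata, content]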
drjdpowell Posted February 25, 2019

Why not just use JSON as your message format? Python will have JSON libraries. Both JKI JSON and the OpenG Variant Tools are very slow, BTW. Try JSONtext, which I developed partly just to get reasonable performance.
gb119 Posted February 25, 2019

So for transferring largish chunks of floating point data (e.g. image data or similar) JSON is quite unwieldy (and I worry about losing precision when round-tripping to/from ASCII data). I haven't looked at it in enough detail, but it seemed to me that there was a good chance that the numpy representation of a double float and LabVIEW's might be sufficiently close that one could do a fairly efficient pack/unpack operation.

I take the point about the speed of the JSON serialisers - the choice was more influenced at this point by what I was used to working with than by speed. I'm looking at replacing it with yours and in the process keeping more of the message as raw JSON strings rather than storing them in arbitrary clusters in variants....
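A rough sketch of the pack/unpack idea above, seen from the Python side - this assumes (it is an assumption, not something the package does yet) that the LabVIEW end uses 'Flatten To String' on a DBL array, which produces big-endian IEEE-754 doubles with an optional I32 length prefix:

    import numpy as np

    def labview_flattened_to_numpy(flat_bytes, has_length_prefix=True):
        # LabVIEW 'Flatten To String' output is big-endian; skip the I32 array-length prefix if present
        offset = 4 if has_length_prefix else 0
        return np.frombuffer(flat_bytes, dtype='>f8', offset=offset).astype(np.float64)

    def numpy_to_labview_flattened(arr, add_length_prefix=True):
        data = np.asarray(arr, dtype='>f8').tobytes()
        if add_length_prefix:
            data = np.array(len(arr), dtype='>i4').tobytes() + data
        return data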
drjdpowell Posted February 25, 2019

You could consider a JSON header, followed by binary data. I strongly suspect the float representation in both languages is identical: IEEE standard.
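Something along those lines, sketched in Python just to make the idea concrete - the framing here (a length-prefixed JSON header describing dtype and shape, followed by the raw IEEE-754 bytes) is purely illustrative and is not anything that exists in the package:

    import json
    import struct
    import numpy as np

    def pack_array(arr):
        # big-endian uint32 header length + JSON header + raw binary payload
        arr = np.ascontiguousarray(arr)
        header = json.dumps({'dtype': str(arr.dtype), 'shape': list(arr.shape)}).encode('utf-8')
        return struct.pack('>I', len(header)) + header + arr.tobytes()

    def unpack_array(blob):
        hlen, = struct.unpack_from('>I', blob, 0)
        header = json.loads(blob[4:4 + hlen].decode('utf-8'))
        return np.frombuffer(blob[4 + hlen:], dtype=header['dtype']).reshape(header['shape'])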
gb119 Posted February 25, 2019

That was the direction I was thinking in - when I've finished unbreaking the effects of changing JSON parser 🙂
gb119 Posted March 6, 2019

The original article has been updated with a new version of the JuPyter Client, built as a VI Package. This version has switched the parser to JSONText, removed the OpenG dependencies, and fixed up a bunch of bugs. The example client application gets installed into the example finder. It's still far from production-ready code...
X___ Posted April 17, 2019 (edited)

@gb119 Outstanding work! Just getting started trying it, so bear with the basic comments:

- The automated kernel path search on Windows fails to find ipython.exe located in C:\ProgramData\Anaconda3\Scripts\ipython.exe. Since I have turned off "Enable automatic error handling dialogs" in Options, I couldn't figure out the problem until I dug down into the diagram (luckily that was trivial due to the nice code layout). Maybe a separate tab with an error indicator could help at that early stage?

- Right now each LabVIEW code instance starts a new kernel and stops it upon quitting. This departs from the broader use case mentioned in your intro (remote kernel) and I am not sure whether that is because of the early nature of the project or because it would require a different design (clearly it would, but would it be fundamentally different?). In particular, the temptation for me would be to be able to interact with an existing Jupyter Notebook (having its own kernel already running) or, better yet, spawn a Jupyter Notebook. Is that something you have in mind for the future?

I'll go back to testing. Keep up this great stuff!

Edited April 18, 2019 by X___ (calling out OP)
gb119 Posted April 19, 2019

On 4/17/2019 at 2:17 AM, X___ said:
> @gb119 Outstanding work! Just getting started trying it, so bear with the basic comments: The automated kernel path search on Windows fails to find ipython.exe located in C:\ProgramData\Anaconda3\Scripts\ipython.exe

Hmm, there was a problem I had there but I thought the version I packaged had fixed it. My current development version should find that path - but it depends quite a lot on whether you have multiple Pythons installed on your machine. Basically there doesn't seem to be a bullet-proof way of getting the correct path in Windows....

On 4/17/2019 at 2:17 AM, X___ said:
> Since I have turned off "Enable automatic error handling dialogs" in Options, I couldn't figure out the problem until I dug down into the diagram (luckily that was trivial due to the nice code layout). Maybe a separate tab with an error indicator could help at that early stage?

That's a sensible idea - it's going into the development code.

On 4/17/2019 at 2:17 AM, X___ said:
> Right now each LabVIEW code instance starts a new kernel and stops it upon quitting. This departs from the broader use case mentioned in your intro (remote kernel) and I am not sure whether that is because of the early nature of the project or because it would require a different design (clearly it would, but would it be fundamentally different?). In particular, the temptation for me would be to be able to interact with an existing Jupyter Notebook (having its own kernel already running) or, better yet, spawn a Jupyter Notebook. Is that something you have in mind for the future?

That's largely a result of the test client being mainly aimed at debugging the protocol and testing message handling, before moving on to code that more tightly integrates LabVIEW programs with the remote kernel. That said, I'm in the process of adapting the client to allow different methods of locating and connecting to the kernel, and that will include suppressing the kernel shutdown message on exit. I'm also (very slowly) working on an implementation of a LabVIEW universal-binary-json serialiser/deserialiser with a view to creating some custom ipython messages for transferring binary data efficiently between LabVIEW and Python. The idea is that the LabVIEW client would create message handlers at the Python end that would allow LabVIEW data to be pushed directly into the Python namespace or to request Python data to be sent back to LabVIEW. Don't hold your breath though, the day job comes first...
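For what it's worth, the kind of search heuristic being described above looks roughly like this in Python - the real search is done in LabVIEW, so this is only to illustrate the logic, and the Anaconda paths are just common example locations rather than anything the package actually checks:

    import os
    import shutil

    def find_ipython():
        # First try whatever is on PATH (equivalent to `where ipython` in a Windows shell)
        hit = shutil.which('ipython')
        if hit:
            return hit
        # Then fall back to a few common per-machine / per-user Anaconda install locations
        candidates = [
            r'C:\ProgramData\Anaconda3\Scripts\ipython.exe',
            os.path.expandvars(r'%USERPROFILE%\Anaconda3\Scripts\ipython.exe'),
            os.path.expandvars(r'%LOCALAPPDATA%\Continuum\anaconda3\Scripts\ipython.exe'),
        ]
        for path in candidates:
            if os.path.isfile(path):
                return path
        return None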
gb119 Posted April 19, 2019

Ok, a bit of Easter holiday coding today. Version 1.1.0 should allow connections to remote and already running kernels (well, it does for me), and will only issue kernel shutdown messages if it started the kernel itself. To connect to a remote kernel, you can either manually fill in a cluster of port numbers etc., simply paste the JSON from the connection file, or (if you have an existing front end to the kernel) do:

    from ipykernel.connect import get_connection_info
    print(get_connection_info())

If starting kernels on another machine, remember to tell them to bind to an IP address that isn't localhost, e.g.

    ipython kernel --ip=u.x.y.z

and make sure the firewall will let the ports through.
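For reference, what get_connection_info() prints (and what you would paste into the client) is the standard kernel connection-file JSON, something like the following - the ports and key here are made-up example values:

    {
      "shell_port": 53794,
      "iopub_port": 53795,
      "stdin_port": 53796,
      "control_port": 53797,
      "hb_port": 53798,
      "ip": "127.0.0.1",
      "key": "6e4a3f55-395885d2d2a4b1f9e9b22e76",
      "transport": "tcp",
      "signature_scheme": "hmac-sha256",
      "kernel_name": ""
    }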
X___ Posted April 19, 2019

Let us know when it is posted on VIPM's package list.
gb119 Posted April 19, 2019 (edited)

2 hours ago, X___ said:
> Let us know when it is posted on VIPM's package list.

That may be a while - but the package file itself is at the top of the thread....

Edit: Thinking about it, because I'm dependent on the ZMQ bindings, which are not available on the NI Tools Network, I'm not sure I can put this package (or the SHA-256 library) on the NI Tools Network either - so it will always need to be installed from manually downloaded VIPM files.

Edited April 19, 2019 by gb119
X___ Posted April 21, 2019 (edited)

On 4/19/2019 at 10:26 AM, gb119 said:
> Ok, a bit of Easter holiday coding today. Version 1.1.0 should allow connections to remote and already running kernels (well, it does for me), and will only issue kernel shutdown messages if it started the kernel itself. ...

OK, so the connection to a Jupyter notebook works - great. I can define a variable in the notebook and read its value in LV. Now how do I send something to the notebook from LV? 🙂

Edited April 21, 2019 by X___
gb119 Posted April 22, 2019

11 hours ago, X___ said:
> OK, so the connection to a Jupyter notebook works - great. I can define a variable in the notebook and read its value in LV. Now how do I send something to the notebook from LV? 🙂

So this is where it gets trickier. Intrinsically the kernel-client protocol is geared around sending strings to the kernel and, by and large, getting strings back. This makes sense if one thinks of the client as essentially a terminal with a keyboard, a screen and a human. So trivially you can express the data to be sent to Python from LV as an assignment statement like "x=3.141592654", have that executed on the kernel, and it will create variables in the kernel's namespace - but it's hardly efficient if what you want to do is send a moderately sized 2D array of floating point numbers over.

I think the solution is to implement a pair of Comm [1] objects to send and receive custom messages [2] in which data can be encoded in a more efficient way and used to manipulate the globals() on the Python side, and to request specific variables to be sent back to LabVIEW. I've had a look around at various 'binary JSON' serialisers and liked the look of ubjson [3] the most (ideally I'd implement the Python pickle algorithm in LabVIEW, but that didn't look fun!). That would solve the how-to-encode-the-data-for-transfer part of the problem.

The actual mechanics, I think, will involve a CommunicationsChannel LabVIEW class that has methods to squirt the Python needed to create the Comm objects at the kernel end, deal with the kernel requesting to open its side of the communications back to LabVIEW, and then methods to send arbitrary LabVIEW data and to request Python data and map it back to LabVIEW types. Finally it will need to implement comm-open, comm-close, and comm message types - but they're basically easy given the class hierarchy that already exists for messages.

All of which is great, but I'm running out of Easter vacation, about to hit the summer exams, and I'm the departmental exams officer so have negative free time for a few months!

[1] https://jupyter-notebook.readthedocs.io/en/stable/comms.html
[2] https://jupyter-client.readthedocs.io/en/latest/messaging.html#custom-messages
[3] http://ubjson.org/
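To make the Comm idea above a little more concrete, here is a rough sketch of what the Python (kernel) side might look like, using the standard ipykernel Comm machinery - the 'labview_data' target name, the 'op'/'name'/'value' fields and the helper function are all made up for illustration; nothing like this exists in the package yet:

    # Sketch: the LabVIEW client would send something like this to the kernel and
    # then open a comm against the 'labview_data' target to push/pull variables.
    from IPython import get_ipython

    def _register_labview_comm_target():
        def target(comm, open_msg):
            @comm.on_msg
            def _recv(msg):
                data = msg['content']['data']
                if data.get('op') == 'set':
                    # push a value sent from LabVIEW into the kernel's namespace
                    globals()[data['name']] = data['value']
                elif data.get('op') == 'get':
                    # send the named variable back to the LabVIEW client
                    comm.send({'name': data['name'], 'value': globals().get(data['name'])})

        get_ipython().kernel.comm_manager.register_target('labview_data', target)

    _register_labview_comm_target()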
X___ Posted April 22, 2019

Sending moral support your way...
X___ Posted April 24, 2019

I thought it might be helpful if I clarified my use case. Most of my development is LV-based, but I can hardly find anyone interested in working with that code (proprietary language, expensive, graphical, etc.), therefore, in order to be able to share and make what I do expandable by others, I need to interface it with something that is exactly the opposite. Jupyter notebooks are that thing today. What I am looking for is:

1) a way to send markdown text and graphing instructions to reproduce plots generated in LV (or plots that can't easily be produced in LV) in the Jupyter Notebook. The goal is to replace a custom-designed notebook I based on a .NET rich text box control, which works fine but is not interactive.

2) a way to pass data structures generated in LV (which the user will have extensive documentation about) to the Jupyter Notebook so that the user of the Notebook can do some processing on their own.

3) a way to send instructions (think custom scripting language) and data to LV.

Point 3 is to some extent covered by your tool, as long as LV polls the kernel (and knows which variable to look for). Whether or not a better communication protocol (with user events?) can be designed is an open question. Points 1 and 2 are pretty much the same thing, the only difference being that 2) might involve bi-directional communication (LV sends data that is processed in the Notebook, which then sends back the result - see point 3). The original use case, however, is merely to provide data for the user to do whatever they want with. A big unknown to me is whether it would make sense to have access to the Notebook structure within LV (cells, history, data).
gb119 Posted April 25, 2019

So what you want to do is not so straightforward. The thing is that the Jupyter client code is not interacting with the notebook directly. When you start the Jupyter notebook you are both starting the front end that runs in a web browser and starting a kernel process in the background that the front end interacts with. What my code does is provide another front end that can talk to the same kernel backend process - so both front ends can change the state of the kernel (i.e. create variables etc.) and interrogate the kernel about its state, and in that way the two front ends are aware that 'somebody else' is messing with 'their' kernel. But the two front ends don't talk to each other - so LabVIEW cannot directly manipulate what the notebook displays.

The only exception that I've found with a little minimal playing is that if you invoke the %matplotlib notebook magic in the notebook and then create a new blank figure in the notebook, then you can plot into that figure from the LabVIEW end, i.e. in the notebook I do:

    from matplotlib.pylab import *
    x=linspace(-pi,pi,361)
    y=sin(x)
    %matplotlib notebook
    figure()

and in the LabVIEW client I do:

    plot(x,y,'b+')

Then I get a plot of the sin function in the notebook - but I think that is more or less an accidental consequence of how the matplotlib notebook backend works.

I'm starting to wonder whether what you need is a LabVIEW Jupyter kernel - i.e. something that could interact with a LabVIEW process via a notebook - that would be a fun project, but just not this one. Thinking further, however, I think your use case is actually a mix of both kernel and client - use case 3) is pretty much a LabVIEW Jupyter kernel, while use case 2) seems to require that the notebook also has access to a Python kernel and is more like having a Python module that knows how to speak to a LabVIEW kernel. Use case 1) is a bit of a mixture....

What this code is best suited for is off-loading processing of LabVIEW data into Python routines, or allowing a LabVIEW front end to interact with a Python-based system - e.g. some sort of distributed control system with a Python kernel sitting in it.
X___ Posted April 25, 2019

I believe a LabVIEW Jupyter kernel is out of the question without NI involvement. And it wouldn't address my use case, as I would want users to work with data in Python. Moreover, I would for instance lose access to all the parallel job tools that I have access to on my local cluster supporting Jupyter notebooks 🙂
Sachin Mohan Posted October 9, 2020

I have deep neural network code in a Jupyter notebook, running in a virtual Anaconda environment and using the Keras and TensorFlow platforms. The program takes a large 1D array (say 10^6 samples) as input and outputs modified data of the same size. Now I have to pass data from a LabVIEW project that I have written to this program, and the output from this program is again fed into LabVIEW, which will transmit it. How can I interface the Jupyter code and LabVIEW? Is it possible? Please do respond.
gb119 Posted October 11, 2020

On 10/9/2020 at 10:40 PM, Sachin Mohan said:
> I have deep neural network code in a Jupyter notebook... The program takes a large 1D array (say 10^6 samples) as input and outputs modified data of the same size. ... How can I interface the Jupyter code and LabVIEW? Is it possible?

So the Jupyter client protocol is probably not the way to pass large chunks of data between Python and LabVIEW - it's intrinsically a text-based messaging system (since it's really designed for interacting with the Python kernel from a console-like device). If your Python kernel is running on the same machine as the LabVIEW code, then probably the simplest way to transfer the data back and forth is to write it to a file. Since even a fast SSD is not particularly fast, you might want to set up a RAM disk for this purpose. You would then use something like this Jupyter client to import some module of your code and then call some function, passing the name of the input file. You make that function return the name of the output file and your LabVIEW code reads it back in. On Unix-like systems you might be able to get away with a named pipe for the transfer mechanism (I've no idea whether one can do that on Windows or not...).

If the Python kernel is remote to the LabVIEW code then clearly you're going to need to send the data over the network. It's quite easy to write a Python TCP/IP server, so I guess you could trigger a function with the Jupyter client that would launch a server. You'd need to manage the process of serializing the data, but if you don't use the Jupyter messaging protocol you can at least send it in a fairly compact binary format. I sort of have an idea that it should be possible to use something like the Msgpack binary serializers to package LabVIEW data types and transfer them, and to write a deserializer in Python at the other end that unpacks them - I have the feeling that the numpy in-memory structure and LabVIEW's flattened double/single arrays might actually be compatible, which would make it much simpler. The Jupyter message protocol sort of has the mechanisms to do this built in, via the concept of a Comm - it just needs time to work on it...
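To make the file-based hand-off described above concrete, a minimal Python sketch - the function and file names are illustrative, and run_model is just a placeholder for the actual Keras/TensorFlow call:

    import numpy as np

    def run_model(samples):
        # placeholder for the actual Keras/TensorFlow model call
        return samples

    def process_file(in_path, out_path=None):
        # LabVIEW writes the raw float64 samples to in_path, then the Jupyter client
        # executes e.g.  out_path = process_file(r'R:\ramdisk\input.bin')  and finally
        # LabVIEW reads the float64 results back from the returned path.
        samples = np.fromfile(in_path, dtype='<f8')   # endianness must match what LabVIEW writes
        result = run_model(samples)
        out_path = out_path or in_path + '.out'
        np.asarray(result, dtype='<f8').tofile(out_path)
        return out_path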