I was trying a Python HTTP communication tutorial - https://aiohttp-demos.readthedocs.io/en/latest/tutorial.html#views - when I had to disable the NI Application Web Server to proceed. And then I thought, what the **** am I doing? Maybe I should take the free (well, prepaid) gift of a working web server.
Here's my task. A central HQ computer will have a GUI that monitors five machine stations, each of which has its own computer. Approximately every 10 ms (negotiable), each station sends a report consisting of two arrays, the larger being 2048 data points, the other much smaller. Whenever HQ feels like it, HQ can tell a station to start or stop (its computer stays on). A local IP connection is used, with a router at each end. There is also a Raspberry Pi with its own IP address at each station's router that can send camera frames to HQ. The station computers use Python and C++ to do their work, not counting whatever needs to be added to communicate with HQ.
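For concreteness, here is a minimal sketch of what one station report might look like on the wire, assuming JSON over UDP purely for illustration; the transport, port, and field names are my own placeholders, not a decided design:

```python
import json
import socket
import time

HQ_ADDR = ("192.168.1.10", 5005)   # hypothetical HQ IP and port, just for the sketch

def build_report(station_id, big_array, small_array):
    """Pack one ~10 ms report into a JSON byte string."""
    return json.dumps({
        "station": station_id,
        "timestamp": time.time(),
        "data": big_array,        # the 2048-point array
        "summary": small_array,   # the much smaller array
    }).encode()

# Fire one report at HQ (UDP datagram; a real design might use TCP, ZeroMQ, etc.)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(build_report(1, [0.0] * 2048, [1.0, 2.0, 3.0]), HQ_ADDR)
```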
Your advice please? Should I use LabVIEW? On both ends, or just at HQ? And which, if any, of the helpful add-ons suggested by Hooovahh should I use?
I have to convert a dynamically generated array into a JSON string and back. Unfortunately, I found that the unflatten method loses the variant data. See the screenshot of the FP and BD and the comments inside.
Is this a bug in JSON Text, or is my data construction simply not supported? In the latter case I would have to modify huge parts of my code, so I hope it is a bug 😉
The second thing I noticed is that the cluster's name "Value" is not used during flattening; instead, the name of the connected constant/control/wire is used. I found the green VI ("Set Data Name__ogtk.vi") in the OpenG Toolkit that lets me set the variant data name programmatically. As you can imagine, I would prefer not to depend on the OpenG VI.
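To make clear what I expect, here is a plain-Python illustration of the round trip (this is not the LabVIEW JSON API, and "Channel"/"Value" are just example names standing in for my cluster):

```python
import json

# A cluster-like structure with an element explicitly named "Value"
cluster = {"Channel": "AI0", "Value": [1.5, 2.5, 3.5]}

flattened = json.dumps(cluster)    # flatten to JSON text
restored = json.loads(flattened)   # unflatten back

assert restored == cluster         # the round trip should not lose any data
print(flattened)                   # {"Channel": "AI0", "Value": [1.5, 2.5, 3.5]}
```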
Thanks in advance for your kind help 🙂
The TDF team is proud to offer, as a free open-source download, the scikit-learn library adapted for LabVIEW.
LabVIEW developers can now use our library for free: simple and efficient tools for predictive data analysis, accessible to everybody and reusable in various contexts.
It features various classification, regression, and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means, and DBSCAN, and, like the well-known scikit-learn Python library it wraps, is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
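For LabVIEW developers who have never used the underlying Python library, this is the kind of workflow scikit-learn provides (plain Python scikit-learn shown here, not our LabVIEW wrapper API):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy 2-D points forming two obvious groups
X = np.array([[1.0, 2.0], [1.2, 1.9], [8.0, 8.1], [7.9, 8.3]])

# Fit a k-means model with two clusters
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(model.labels_)           # cluster index assigned to each point
print(model.cluster_centers_)  # coordinates of the two centroids
```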
Coming soon: our team is working on the "HAIBAL Project", a deep learning library written in native LabVIEW, fully compatible with CUDA and NI FPGA.
But why deprive ourselves of the power of ALL FPGA boards? No reason, which is why we are working on our own compiler to make HAIBAL fully compatible with all Xilinx and Intel Altera FPGA boards.
HAIBAL will offer more than 100 different layers, 22 initializers, 15 activation types, 7 optimizers, and 17 losses.
As we like the AI products from Facebook and Google, we will of course make HAIBAL natively compatible with PyTorch and Keras.
Sources are now available for free on our GitHub: https://www.technologies-france.com/?page_id=487
I'm a beginner in LabVIEW and have been testing cRIO for about two weeks now. I still cannot solve this problem. I have attached my test project for explanation.
What I want to achieve, for example: with a time sequence t1, t2, t3, t4, DO outputs T, F, T, F, AO1 outputs A1, A2, A3, A4, and AO2 outputs B1, B2, B3, B4, and the delay between AO1 and AO2 should be as small as possible (AO1 and AO2 may come from different modules).
I searched Google and the NI forum and decided to use a For Loop and the Loop Timer on the FPGA.
The reasons are as follows:
1. To achieve a specific time interval, I can use Wait or the Loop Timer. But in "FPGA 0--Test DO.vi", the Wait cannot achieve the specified interval; it is off by several microseconds (maybe more). Also, completing one iteration of the While Loop takes 134 µs, so I can't explain how it could achieve a time interval below 134 µs. Even if I actually do get a 10 µs delay, the input is not actually 10 µs, so it's not accurate.
So, following the NI example, I use the Loop Timer.
2. In "FPGA 1--Test DO and AO.vi", I find that the loop timer helps me to realize accurate time interval, however, it ignore the first time interval. Such as, t1, t2, t3, t4, with disired output A1, A2, A3, A4. It goes A1(t2), A2(t3), A3(t4), A4(t1). And in "FPGA 2--Test DO and AO.vi", it has same problem. DO0 and AO1 goes A1(t2), A2(t3), A3(t4), A4(t1). And AO0 is always ahead of DO of t1.
People on the NI forum advise that I should put AO0 and AO1 into one FPGA I/O Node and use an SCTL. But so far I haven't found any example of this (on Google or the NI forum; maybe it's too basic). The main difficulty is that AO0 and AO1 must follow different timelines, and the dimensions of their input arrays are different. Can anyone offer me advice?