Posts posted by MarkCG

  1. All I/O goes through the FPGA, and you can program a cRIO with both a program running on the cRIO real-time processor and one on the FPGA (also called a "personality"). You do not, however, HAVE to explicitly program the FPGA. You can use the "scan engine" if all you want to do is read and write I/O at rates up to 500 Hz or so. All your program logic can be done in a LabVIEW program running on the real-time processor.

    Where the FPGA comes in is when you need to do things at high speed or with very high timing precision. For example, say you want multiple PID controllers running at 10 kHz, each reading an analog input as the process variable and outputting the control signal on an analog output or as a PWM signal. You can do this on the FPGA; you couldn't get anywhere near that fast otherwise. That, or doing things like responding to digital triggers to perform some action within microseconds, anything that generally requires you to access I/O on microsecond timescales. You can also use the FPGA's built-in DSP blocks to do things like Fourier transforms of incoming data and a whole myriad of other signal processing functions contained in the Xilinx IP that you can access on some cRIOs.

    You are more limited in the datatypes and functions you can use on the FPGA; typically you do things in LabVIEW FPGA code you would not do in normal LabVIEW.

    On the real-time side of the cRIO, you can do pretty much anything you would do on a normal computer, with far fewer limitations than on the FPGA: you have access to the internet, serial communication to instruments, all the various math and analysis VIs, pretty much anything except UI-related things (unless you have a cRIO with a display port).

    The host is generally just the thing you use as a user interface: PC, tablet, phone, what have you. Typically the cRIO has the machine's logic built into its real-time program, and the "host" sends commands or makes requests of it via TCP/IP, network streams, or even UDP. The cRIO handles those requests and then lets the host know whether it can do them and whether it did.
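    To make that concrete, here is a minimal host-side sketch in Python of that command/response pattern. The hostname, port number, and the "START" command are all made up for illustration; in practice you would use whatever commands your RT program's TCP server defines.

    import socket

    # Hypothetical host-side client: send a text command to the cRIO's
    # TCP command server and wait for the acknowledgement.
    with socket.create_connection(("crio-hostname", 6340), timeout=5.0) as sock:
        sock.sendall(b"START\n")            # ask the RT program to do something
        reply = sock.recv(1024).decode()    # cRIO reports whether it can/did do it
        print("cRIO replied:", reply.strip())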

     

     

  2. On 11/26/2013 at 2:47 PM, GoGators said:

    Just came across this post.  Sys Eng at NI is interested in ZeroMQ and RabbitMQ.  We've been playing around with both and are considering making an open-source library if it fits a need.  One of the SEs even built it for the cRIO-9068 platform for alliance day to talk about reuse on NI Linux RT.  Don't count us out yet :)

    Turns out one of my coworkers is trying to compile ZeroMQ for the cRIO-9068, but isn't having success. Anyone have ideas, or have the .so file available? We also have a cRIO-9038, which is a different processor architecture; maybe it will work there?

  3. What is the datatype you want to write? If the manual for the device says that a particular register is interpreted as a 16-bit signed integer (I16), you will need to use the Type Cast function on the I16 data in your LabVIEW program and wire that to the "write single register" VI.

    For single-precision floating-point values, which are 32 bits, the Modbus device will typically have you write two consecutive registers, usually called the high word and the low word. It's done this way because every Modbus register is 16 bits. So what you do is type cast the SGL value to an array of two U16s:

    [Image: typecast single float.png]

    and wire the array of U16s to the "write multiple registers" VI. Usually, if the high word precedes the low word in memory, this will work. If not, you have to swap the high and low words. Modbus is fairly old, dating back to the late 70s, and doesn't have built-in support for floating point.
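    If it helps to see the byte-level picture, here is the same packing sketched in Python; this just illustrates the type cast, it's not any particular Modbus library's API.

    import struct

    def float_to_registers(value):
        # Pack a 32-bit float into two 16-bit register values, high word first.
        raw = struct.pack(">f", value)         # big-endian IEEE-754 single
        high, low = struct.unpack(">HH", raw)  # split into two 16-bit words
        return [high, low]

    print(float_to_registers(3.14))  # [16456, 62915]

    If the device expects the low word first, just reverse that two-element array before writing.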

     

  4. 3 hours ago, drjdpowell said:

    My reasoning may be wrong but I rejected:

    1) NI implementation as it is part of the DSC and requires a runtime license on each unit.

    2) NI Labs free version because it is both unsupported AND password protected (a deadly combination).

    3) Modbus Master by Plasmionique because it doesn’t do Slave/Server (though I’m using it as a guide).

    4) Old NI library because it is very old.   Unsupported for more than a decade (and it was very old back then).  

    Also, it may seem strange to say, but these libraries aren’t solving enough of my problem.  Building and parsing strings is the easy part; Modbus isn’t that complicated.  It’s shoehorning my complex data into a bunch of U16 registers that seems harder.   I think the above solutions maintain internal models of these registers, which I must continually update.   Instead, I plan to do an event/message-based solution where I build the command responses directly from the application data.

    Fair enough. I will say that the old library did the job pretty well, though; I never had a problem with it, even running on something as old as a FieldPoint. I actually modified it to make reads faster, and also to build a communications library for a flowmeter that used a proprietary, bastardized version of Modbus. The core logic of it is pretty solid and worth looking at.

  5. Do you guys think that sending the automated error report to NI after a hard crash actually helps them fix things? I usually click "no" out of instinct, but maybe I should start. It's hard to believe how much we put up with LabVIEW's hard crashes; for me a couple a day is no big deal, yet crashes seem pretty rare in other programs nowadays. Chalk it up to the crustiness of the codebase, I guess.

  6. The DSC toolkit glyphs really are pretty horrible. More 1995 than 2002, if anything. I've been using Edraw Max to create graphics. There is a lot of P&ID stuff in there. No high-vacuum symbols, but it's pretty easy to create graphics from the raw shapes and apply color gradients to give them that 3D effect. The ability to scale easily thanks to vector graphics is awesome too. You can set your drawing sizes in pixels directly, which makes it really easy to design for a given resolution.

    https://www.edrawsoft.com/pid-examples.php

    I will typically make a P&ID with Edraw, export it as a PNG, drop that directly onto the front panel, and then drop my customized controls, with graphics also made in Edraw, in place over it. It's produced pretty decent results.

  7. Say you have several VIs that you are inserting into a subpanel display as pages, as an alternative to making them pages of a tab control. Say those pages each have, I don't know, something that requires more-than-average CPU to draw, like an XControl that's getting updated at a decent rate. Does LabVIEW use resources to redraw each of those display elements if the front panel is closed in this situation?

  8. 6 hours ago, mje said:

    Yes, I think Python is a serious contender. I was shocked when they announced DAQmx support for it on Tuesday which will further strengthen the Python position.

    For me the relevancy is the relative ubiquity of microcontrollers, the tools to program them, and the cheap cost for low volume PCB manufacturing. I'm hard pressed to find any need in my research group where I'd prefer to use the NI RT platform over rolling our own design with bare metal programming in C. Cortex-M devices are capable as hell and the cost of printing/populating several custom PCBs is usually cheaper than a single NI RT chassis. I'd still go to NI if an FPGA application sprang up, but those are few and far between in my line of work. As for the desktop, LabVIEW has been losing ground for quite some time, but I still touch it from time to time.

     

    It would be cool if LabVIEW could gain some ground in the embedded world, instead of becoming more and more a high-performance, high-cost niche. I think Python support is the right move: there are plenty of other DAQ manufacturers out there that support it, and NI would lose out if they didn't. I'm learning Python and am pretty impressed with how easy it is to get back into text programming with it after many years of LabVIEW. Text programming has come a long way from the days of the DOS-based IDEs and the C++ I learned back in high school.
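    As a taste of the new support, here is a minimal voltage read with the nidaqmx Python package. "Dev1" is a placeholder for whatever device name you have configured in NI MAX.

    import nidaqmx  # NI's Python bindings for DAQmx

    # Read a single voltage sample from analog input ai0.
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        print("Voltage:", task.read())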

  9. Hi all, I would like to communicate with a PROFIBUS DP device from a CompactRIO 9035. I did some research and I am not happy with my options. The cRIO module to talk to it is $1500, which seems outrageously expensive for what I want to do: talk to a single device at 9600 baud. It also looks like you need to use it in FPGA mode, which will force me to start compiling bitfiles for my nice scan-engine-only project.

    It does look like PROFIBUS DP uses an RS-485 physical layer, which is good. Theoretically I could just communicate with the device over my built-in RS-485 port. However, I am not finding any PROFIBUS libraries available for download that work with a generic serial port. I believe the NI PROFIBUS driver is tightly integrated with the cRIO PROFIBUS module. I am also not jazzed about implementing the PROFIBUS protocol from scratch.

    Does anyone have suggestions on good 3rd-party adapters or libraries that would allow me to communicate with this PROFIBUS device with a minimum of programming?

  10. Just spitballing here: maybe you could measure the part's electrical resistance directly with the 4-wire method and correlate that to temperature using the temperature coefficient of resistance. Using a sine-wave excitation current at a particular frequency, you can use a lock-in amplifier, which can detect small signals buried in a lot of noise. I am assuming the part is metallic and conductive, and since you are putting TCs on it, I assume you can get leads on it.

    The other thing about that technique is that you can extract other information, like the specific heat capacity of the part as a function of temperature. That all falls under the technique known as "modulation calorimetry".

    Using TCs simultaneously would allow you to know the start and end temperatures accurately, with the high speed resistance measurement filling in the gaps in data.
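    As a rough sketch of the resistance-to-temperature step in Python, assuming a simple linear temperature coefficient of resistance (all numbers here are made up for illustration):

    # Invert R = R0 * (1 + ALPHA * (T - T0)) to estimate temperature.
    R0 = 0.100      # resistance in ohms at the reference temperature
    T0 = 25.0       # reference temperature, deg C
    ALPHA = 0.0039  # 1/deg C, roughly right for copper

    def temperature_from_resistance(r):
        return T0 + (r / R0 - 1.0) / ALPHA

    print(temperature_from_resistance(0.115))  # about 63.5 deg C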

     
