Everything posted by patufet_99

  1. Hello, in the LabVIEW Modbus API Master example there is no error handling. Is there some function or method to explicitly check the status of the communication in case of a communication interruption, a Modbus server restart, or another Modbus error? Should that be done by parsing the error codes, if any, when one reads/writes registers, and then retrying the connection (see the sketch below)? Thank you for your help.
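     A minimal sketch of that check-and-retry pattern, written in C since the LabVIEW API is graphical; mb_read_holding() and mb_reconnect() are hypothetical stand-ins for the actual Modbus read and (re)connect calls:

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical stand-ins for the real Modbus API calls. */
        static int  mb_read_holding(int addr, int n, unsigned short *out)
        { (void)addr; (void)n; (void)out; return 0; }   /* stub */
        static bool mb_reconnect(void) { return true; } /* stub */

        int main(void) {
            unsigned short regs[8];
            for (int i = 0; i < 100; ++i) {
                if (mb_read_holding(0, 8, regs) == 0) {
                    /* success: consume the data */
                } else {
                    /* any error (timeout, server restart, dropped link):
                       tear the connection down and rebuild it, then retry;
                       a real loop would sleep/back off between attempts */
                    while (!mb_reconnect())
                        ;
                }
            }
            return 0;
        }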
  2. Thank you for your comments. This seems to be as you mention, due to DNS timeouts as explained in section 2.2 of the link above. JKSH: this is on the RT side, not the PC; if an invalid DNS server is defined on the PC there is no problem. As you say in section 2.2, this is clearly the expected behaviour. I just wondered about the reason: the calls to shared variables on the RT side are all target-relative to the local target. It is on the PC side that the variables are accessed through a direct IP by programmatic access.
  3. We have an application where we communicate with an sbRIO target through Shared Variables. When the host and target are both on the local network all works fine (both are configured with a static IP address). When both are connected directly with an Ethernet cable, the deployment of the Shared Variables on the sbRIO target is extremely slow (5-10 seconds). This behaviour happens only if a DNS server address is defined. As the Shared Variables are deployed locally on the sbRIO, why should the DNS server (when none is reachable) have any effect on the deployment? Any hints?
  4. Thank you for your answers. The behavior that I observed by taking the "sizeof" of the structures is exactly as you described. A structure like this one:

        typedef struct tagINTERFACEPARAM {
            WORD  wSendPortNo;
            DWORD dwTimeout;
        } INTERFACEPARAM, *LPINTERFACEPARAM;

     left two empty bytes after the WORD, while this one did not leave any empty byte:

        typedef struct tagCONNECTPARAM {
            WORD  wSendPortNo;
            WORD  wRecvPortNo;
            DWORD dwSize;
        } CONNECTPARAM, *LPCONNECTPARAM;
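     A quick check of the layout described above, assuming Visual C++ default packing (windows.h supplies WORD and DWORD):

        #include <stdio.h>
        #include <stddef.h>
        #include <windows.h>

        typedef struct tagINTERFACEPARAM {
            WORD  wSendPortNo;   /* offset 0, 2 bytes                  */
            /* 2 padding bytes: dwTimeout must start at a
               multiple of 4                                           */
            DWORD dwTimeout;     /* offset 4                           */
        } INTERFACEPARAM;

        typedef struct tagCONNECTPARAM {
            WORD  wSendPortNo;   /* offset 0                           */
            WORD  wRecvPortNo;   /* offset 2: both WORDs fill the
                                    first 4 bytes                      */
            DWORD dwSize;        /* offset 4, already aligned          */
        } CONNECTPARAM;

        int main(void) {
            printf("INTERFACEPARAM: size %zu, dwTimeout at %zu\n",
                   sizeof(INTERFACEPARAM), offsetof(INTERFACEPARAM, dwTimeout));
            printf("CONNECTPARAM:   size %zu, dwSize    at %zu\n",
                   sizeof(CONNECTPARAM), offsetof(CONNECTPARAM, dwSize));
            return 0;  /* both structs come out 8 bytes; only the
                          first one contains padding */
        }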
  5. Thanks for your reply. I cannot change the DLL. I guess that I will have to insert dummy bytes into my LabVIEW structures to get the correct data.
  6. To use a controller from LabVIEW I have to use some functions of a DLL. For one of the functions, according to the header file (.h), there is a structure with parameters of different types that I have to pass to the DLL. Some of the parameters are BYTE (1 byte) and WORD (2 bytes). When compiling this kind of structure with Visual C++ and looking at its size with sizeof(), it seems to me that 4-byte variables have to start at an offset that is a multiple of 4. For example, if there is a BYTE and then a DWORD, the 3 bytes after the BYTE are left as padding and the DWORD occupies bytes 5 to 8 (see the sketch below). When defining a LabVIEW cluster to match the DLL structure, will LabVIEW do the same? If in my cluster there is a U8 variable and then a U32, will the U8 still take 4 bytes? Thank you.
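     A minimal sketch of that BYTE-then-DWORD case, assuming Visual C++ default alignment; as the replies above conclude, a LabVIEW cluster meant to match it would need explicit dummy pad bytes after the U8:

        #include <stdio.h>
        #include <stddef.h>
        #include <windows.h>

        typedef struct tagEXAMPLE {
            BYTE  bFlag;    /* offset 0                          */
            /* 3 padding bytes inserted by the compiler          */
            DWORD dwValue;  /* offset 4: aligned to its own size */
        } EXAMPLE;

        int main(void) {
            printf("sizeof(EXAMPLE) = %zu\n", sizeof(EXAMPLE));            /* 8, not 5 */
            printf("dwValue offset  = %zu\n", offsetof(EXAMPLE, dwValue)); /* 4        */
            return 0;
        }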
  7. Some years ago it was possible to interact with Excel from LabVIEW through the DDE functions. I do not know if this still works in recent versions of LabVIEW.
  8. Thank you for your comment. That would make sense.
  9. Hello, I have a VI where, after closing the file, there is a check whether some parameter has changed, so that the file name should be changed accordingly. What is done is simply to create a new path from the current parameters and compare this path with the original path of the saved file. If both paths are identical, nothing is done. If the paths are different, the file is renamed with the "Move" VI (a sketch of the logic follows below). The VI works as expected most of the time, but where the executable is installed by a customer, error 10 is returned occasionally. I do not understand how this can happen: if the path is a duplicate, the result of the comparison should prevent the file name change. If another file with the same name already existed, the returned error would be different. Any hints about what could be happening are welcome. Thank you. Regards.
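     A rough C sketch of the logic described above, just to make the flow explicit; build_path() and the file names are hypothetical stand-ins for assembling the name from the current parameters:

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical: build the file name from the current parameters. */
        static void build_path(char *buf, size_t len) {
            snprintf(buf, len, "%s", "data_run42.log");
        }

        static void rename_if_changed(const char *saved_path) {
            char new_path[260];
            build_path(new_path, sizeof new_path);
            if (strcmp(new_path, saved_path) == 0)
                return;                        /* identical: nothing to do    */
            if (rename(saved_path, new_path))  /* the "Move" VI equivalent    */
                perror("rename");              /* this is where an error such
                                                  as LabVIEW's error 10 would
                                                  surface                     */
        }

        int main(void) {
            rename_if_changed("data_old.log");
            return 0;
        }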
  10. A thing that I have struggled with is that the EditPos of the cell is incremented after the event has been filtered, so the KeyFocus should be set on the same cell and not incremented; otherwise it will be incremented twice.
  11. In an application I would like to use filter events to fill in a table. The idea is to be able to decide what happens depending on the pressed keys: arrows, escape, etc. Additionally, the input value should be checked, so that depending on the row the input values are in the expected range or correspond to the expected type. This is the reason to use the filter event: unwanted keys or characters can be discarded while doing the input. The problem that I have is that when the Enter or Return key is pressed to accept a value, the value has not been changed yet (because it is a filter event, I guess). The behaviour can be tested with the attached snippet. Is there a way to know the input value of the cell in the event? I'm using LabVIEW 2014 SP1 3f 32-bit on Win10.
  12. I have found a way to do it with the FindCtrlWithKeyFocus VI Invoke method. Aknowledge_Controls_Input_2.vi
  13. For an application I am trying to make a VI where the user must fill in several controls (not all of them). The user can either change the value or not, but he should acknowledge it. Before the acknowledgment the control background is red; afterwards it is set to white. Attached there is a VI in which I do that by key-focusing on the controls and checking the "Enter/Return" keys with an event structure. The event checks if the Tab key has been pressed and discards it to avoid losing the key focus. While this works, the user can accept the input with the mouse on the top-left "Validate" icon of the window (see attached picture) instead of using the Enter key. In this case the event is not filtered and the VI does not behave as I would like: - Is there a way to monitor (with the event case) the input validation done with the mouse, or is there a way to disable this mouse validation and force the use of the Enter key? (For instance, a workaround is to disable the toolbar.) - When the user clicks on another window or Alt+Tabs away, how can the focus of the VI window be monitored, so that the current control key focus can be activated again when the VI window returns to the top? Thank you for your hints. Aknowledge_Controls_Input.vi
  14. mje, thanks for your reply. I do not have a CAR number; I will ask for it. The behaviour is the same as in my example: the grid and the graph flow faster than the time scale. In my application, to take screenshots of the graph I send the data to a sub-VI through a reference, and then the scales, history, etc., copying the values with attribute nodes. The graph data of the sub-VI then matches the time scale correctly.
  15. This bug seems to be specific to LabVIEW 2012. LabVIEW 2011 and previous versions work as expected.
  16. I think that, more than daylight saving, it has to do with the conversion from double to timestamp being done in UTC rather than in local time. http://zone.ni.com/reference/en-XX/help/371361J-01/glang/to_timestamp/ As the display is shown in local time, 4 min 35 s UTC = 1 h 4 min 35 s local (in Switzerland); a small demonstration follows below. Your reply was very useful, thank you very much. Patufet. I just reduced the code of another, more complex VI to reproduce the problem so that it is clearly shown. In any case, if you set the loop time to 1 s the problem is exactly the same. In the posted VI the waveform dt = 1 s and it is updated every second.
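     A small C demonstration of the effect, assuming the machine's time zone is set to Switzerland (UTC+1 in winter): an elapsed time of 4 min 35 s, misread as an absolute timestamp, picks up the UTC offset as soon as it is displayed in local time:

        #include <stdio.h>
        #include <time.h>

        int main(void) {
            /* 4 min 35 s of elapsed time, interpreted as
               seconds since the epoch */
            time_t elapsed = 4 * 60 + 35;
            char buf[32];

            strftime(buf, sizeof buf, "%H:%M:%S", gmtime(&elapsed));
            printf("as UTC:        %s\n", buf);  /* 00:04:35 */

            strftime(buf, sizeof buf, "%H:%M:%S", localtime(&elapsed));
            printf("as local time: %s\n", buf);  /* 01:04:35 with
                                                    TZ=Europe/Zurich */
            return 0;
        }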
  17. Hi ShaunR, thanks for your reply. a) Loose Fit is disabled (grayed out) in the waveform chart, so I do not think that is the problem. A curious thing is that if you stop the VI after 4 min 35 s, look at the properties of the waveform and click Cancel, the graph gets corrected (it gets compressed by 25 seconds to the right)! b) It could be the daylight saving time, but I still do not understand why the current time minus the time 5 minutes ago would give 1 hour and 5 minutes of difference. Both times should include the 1 hour of daylight saving time. I have attached the VI saved in LabVIEW 2009. I do not know if the behaviour is identical in versions prior to 2012. Regards waveform_chart_time_problem.vi
  18. Hello, I am experiencing some problems with the time scale of a waveform chart. I am using LabVIEW 2012 SP1 on a Win7 PC. The attached VI updates the waveform every second and shows on the Y scale the seconds passed (modulo 60). The graph seems to flow more quickly than the X scale: after 4 minutes and 35 seconds the graph is shown full, but the X scale spans 5 minutes. What is going wrong here? Is there a problem with the code? An additional question is why the difference current time - initial time gives 1 hour, 4 minutes and 35 seconds, while it should be only 4 minutes and 35 seconds. Any hint is welcome. waveform_chart_time_problem.vi
  19. The task was initialized with a "DAQmx Timing" VI set to (Implicit). Simply removing this "DAQmx Timing" call solved the problem.
  20. Hello, I am using the "CI Freq" DAQmx measurement to measure the frequency of a digital signal (Low Frequency with 1 Counter). I do the measurement continuously and it works as expected. However, for the first iteration this DAQmx VI seems to calculate the frequency based on the time between the task start and the first detected edge, giving an arbitrary value. Is there a way to tell the task to start on an edge detection so that the first measured value is correct? What I do for now is simply ignore the first iteration (see the sketch below). Thank you for your help.
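     For reference, a hedged sketch of the equivalent task through the NI-DAQmx C API, with the first (arbitrary) sample simply read and discarded; "Dev1/ctr0", the frequency range, and the iteration count are assumptions, and error checking is omitted:

        #include <stdio.h>
        #include <NIDAQmx.h>

        int main(void) {
            TaskHandle task = 0;
            float64 freq;

            DAQmxCreateTask("", &task);
            DAQmxCreateCIFreqChan(task, "Dev1/ctr0", "",
                                  1.0, 1000.0,            /* expected range, Hz  */
                                  DAQmx_Val_Hz, DAQmx_Val_Rising,
                                  DAQmx_Val_LowFreq1Ctr,  /* low freq, 1 counter */
                                  0.0, 0, NULL);
            DAQmxStartTask(task);

            /* the first interval runs from task start to the first
               detected edge, so its value is arbitrary: discard it */
            DAQmxReadCounterScalarF64(task, 10.0, &freq, NULL);

            for (int i = 0; i < 10; ++i) {
                DAQmxReadCounterScalarF64(task, 10.0, &freq, NULL);
                printf("%.3f Hz\n", freq);
            }

            DAQmxStopTask(task);
            DAQmxClearTask(task);
            return 0;
        }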
  21. You are right: by defining a different port than the default, as you described, it works. Thanks a lot!