Posts posted by JamesMc86

  1. How are you transferring the code to the PXI?

    I think you are correct that the bitfile will be included with the VI (though I have to say I have not done this as a source distribution rather than an rtexe before). This means you may need to reopen the VI to link to the new bitfile rather than transferring the file separately with the VI. This is certainly the case with an rtexe, but I may need to double check this with the source method.

    The other option is that you can burn the bitfile directly to the FPGA rather than depending on the RT VI to download it.

    Definitely in the VI:

    Note  The Open FPGA VI Reference function extracts the bitstream from the compiled FPGA VI or bitfile and stores the bitstream when you save the host VI. The bitstream contains the programming instructions LabVIEW downloads to the FPGA target.

    from http://zone.ni.com/reference/en-XX/help/371599C-01/lvfpgahost/open_fpga_vi_reference/

  2. I am not familiar with the Microsoft training and certification schemes, but a quick search shows a lot of similar courses at similar prices (3/4/5 day intensive courses, e.g. http://www.microsoft.com/learning/en/us/course.aspx?id=2559b&locale=en-us), so it is not that different. There are distance learning courses that you do over a year or two, but the total effort tends to come out at a few weeks in total.

    I am not saying that it should not happen with LabVIEW, but the reality is that it is not something that is requested (although there are universities that do longer courses). The reason is that LabVIEW programming is not the primary role for many LabVIEW users, so they are not going to commit 6 solid months to learning the tool inside and out when they should be working on the other 95% of their role. When people do use LabVIEW more, they will take more modules (getting through everything with some time to practice would probably take 6 months). So I am not saying that it shouldn't or couldn't happen with LabVIEW, I'm saying that in reality it doesn't. (Trust me, I work on the tech support side and would love everyone to be a CLA ;) )

    It is hard to do what an experienced LV programmer can do in C.

    Show me another cross platform, parallel, fast development, HW integrated, fully supported language.

    Amen!

  3. Why should a .NET course designed for noobs take one year, covering most of the practical and academic software engineering issues, while LV courses are 2 days long and cost almost the same?

    I wonder if Goop and G# courses are as shallow.

    What course are you referring to? This is the challenge: teaching the entire OOP ecosystem in 2/3/5 days is simply not possible. People need to go away and work with it over time. The challenge is explaining to a manager that they are going to have to wait a year for that project and spend a lot of their engineers' time learning the concepts they need.

    Take anyone from this forum and let them give a course about a live large scale example and you'll have an exciting course that all would love to take

    I would love to sit that course! And I think this is where user groups and communities really enhance LabVIEW. There is no hiding that CLAs (and engineers at that level) are hugely important for introducing these topics over time to other engineers.

    The reality is that you guys on here are the metaphorical 1% (i.e. I don't know an actual figure). The vast majority of LabVIEW users are not creating large applications with LabVIEW and do not have the time to learn the tool in huge depth. LabVIEW is a tool for them; they need to spend 99% of their time doing physics research/engine design/testing parts, and this is where the majority of courses are primarily aimed (i.e. for a lot of people this is all their LabVIEW learning time, but for a full time developer this is only the start).

    I would like to see more advanced training and I believe NI are trying to build the ecosystem for this (as an attendee of the inaugural European CLA summit this year, it went down very well), but as you say I don't think a 2/3 day course taught from rigid "one size fits all" material is the answer.

  4. Hmm, you may be disappointed with the software engineering course if you are looking for specific test code examples. It discusses the types of testing and has some exercises on unit testing using the Unit Test Framework.

    In reality I don't know if that many graduates do have OO skills. My course (systems engineering) only had an optional OO module (everyone does C), and even then OO design isn't touched on in a massive way. Many graduates coming from mechanical engineering, civil, physics and electronics are in the same boat, and these are the people we tend to be teaching. The implementation doesn't worry me: any LabVIEW programmer should be able to use a class, most should be able to create one, and I think anyone can be taught this. It is the design which is a tough concept to grasp, and I personally did not get it until I had some practical experience. This is one example of something I think we can't teach completely in the classroom.

  5. In general, most NI courses, I think, are outdated with no OO, design patterns, HAL, MAL, Actors, advanced TDMS, Gateway Requirements, Testing, G#...

    Most of the courses don't get more than 2 versions behind and teach the latest techniques (e.g. the brand new OOP course), which means that much of this is a conscious decision.

    You are obviously a big fan of OO, and in good company here, but this is a difficult concept for a newcomer to LabVIEW (who is normally new to programming), so it is never going to be central to your Core 1/Core 2/hardware courses, where many students may only be developing simple lab or DAQ applications and don't have an interest in this subject unless it is necessary.

    I would say it would be nice to see some in Core 3, and this is the level of programmer the OOP course is targeted at. However, that course does (and needs to) assume zero OO design knowledge, and I think this is why it is separate; doing all of this in Core 3 would make it an intense course and risks leaving some people disenchanted. I suspect we will see more come in as it becomes more widespread (there is an OOP solution to the course project on the CD).

    In terms of requirements and testing, this is now covered in Managing Software Engineering in LabVIEW, which was introduced a few years ago.

    This is probably a good audience to ask how you find the advanced courses as well. My feeling is that some of the material at this level can only be gotten across well through communities and user groups, where advanced users can share best practices and show many practical examples.

  6. I've not come across this specific error but think I can explain it.

    A DAQmx task is somewhat defined around its timing engine, i.e. one task will always have one timing setup. The WLS devices have no way to share timing signals (naturally), so they cannot share one physical timing engine, and that is why you cannot have them in a multi-device task. You will need one task for each device, although I would imagine this would actually help with your task!
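
    If it helps to see the same idea outside of LabVIEW, here is a minimal sketch using NI's nidaqmx Python API. The device names and channel are hypothetical; the point is simply one independent task per device, each with its own sample clock, rather than one multi-device task.

        import nidaqmx
        from nidaqmx.constants import AcquisitionType

        # Hypothetical device names - one task per WLS device, since they
        # cannot share a timing engine and so cannot live in one multi-device task.
        devices = ["WLS_Dev1", "WLS_Dev2"]

        tasks = []
        for dev in devices:
            task = nidaqmx.Task()
            # Each task gets its own channel list and its own sample clock.
            task.ai_channels.add_ai_voltage_chan(f"{dev}/ai0")
            task.timing.cfg_samp_clk_timing(rate=1000.0,
                                            sample_mode=AcquisitionType.CONTINUOUS)
            tasks.append(task)

        for task in tasks:
            task.start()

        # Read independently from each device's task.
        for task in tasks:
            data = task.read(number_of_samples_per_channel=100)
            print(len(data))

        for task in tasks:
            task.stop()
            task.close()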

  7. JamesMc86, if the operation is protected as a singleton you'll have the same bottleneck issues with every method you use.

    Using get/set version of pseudo FGV does not implement the singleton design pattern and has only a little advantage over a simple global variable.

    Absolutely. That a get/set FGV is far superior to global variables is one of the most oversubscribed myths around at the minute. As you described, anything with a central access point becomes a shared resource (unless there is some buffering to reduce the effect), and this means parallel accesses will have to be arbitrated between. I am not saying that this is a bad method; the point I am trying to make is that there is no silver bullet or any one technique that fits every situation, and as Ryman suggested in one of his posts, a mix of techniques is normally required.

    Your method would be described as a tag mechanism. It is good for when you need the latest value available on demand in many areas and you don't care about what the value used to be.

    However, if you need to guarantee that every value is received, then a queue (or network stream between targets) is far more appropriate. They serve different care-abouts and needs (see the sketch at the end of this post).

    Section 1 of the cRIO dev guide actually discusses the types of communication quite well, although it probably isn't an exhaustive discussion of all the methods available. It's at http://www.ni.com/compactriodevguide/sec1.htm
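
    To make the tag-versus-queue distinction concrete outside of LabVIEW, here is a minimal Python sketch (the names are mine, purely for illustration): a "tag" that only ever holds the latest value, and a queue where every value written is eventually read.

        import queue
        import threading

        # "Tag" style: a single latest value, protected by a lock.
        # A fast writer simply overwrites; readers only ever see the newest value.
        class Tag:
            def __init__(self, initial=None):
                self._lock = threading.Lock()
                self._value = initial

            def write(self, value):
                with self._lock:
                    self._value = value          # last write wins

            def read(self):
                with self._lock:
                    return self._value           # always the latest value

        # Queue style: every value is buffered and delivered exactly once,
        # so nothing is lost even if the reader falls behind temporarily.
        tag = Tag()
        q = queue.Queue()

        for i in range(5):
            tag.write(i)      # after the loop, only 4 is visible via the tag
            q.put(i)          # all of 0..4 are still waiting in the queue

        print("tag latest value:", tag.read())                   # -> 4
        print("queued values   :", [q.get() for _ in range(5)])  # -> [0, 1, 2, 3, 4]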

  8. Do you know what is the difference between bundle/unbundle from a class control and a local variable?

    The problem with using bundle/unbundle is that it won't work outside of the class or even in its children.

    I think that is the key difference, to be honest, but it is critical to an OO implementation. It enforces encapsulation, which allows us to produce code with low coupling between components.

    I wish NI made a fast basic class control data access which is OO, takes syncing into account, allows singleton and avoids races

    Why does it need to be OO? OO is a way of designing/organising your application. The standard data transfer mechanisms in LabVIEW work with OO or without it. You can put objects through local variables, global variables, FGVs, DVRs, queues, notifiers, whatever you want. Just because you want to work in an OO way doesn't mean that you are restricted in communication methods. What you really need to decide is what access/sync/buffering you need, and that will decide which of these methods is appropriate.

    As for your implementation, it looks good if you need variable/tag style access (no sync, latest value). By putting sequences of accesses inside the in place element structure you can avoid race conditions. You just have to consider that if you have lots of fast accesses in different parts of your application it could become a bottleneck as a shared resource.
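
    For readers more comfortable with text languages, here is a rough Python analogue of that idea (names are mine, not from the thread): the accessor holds a lock for the whole read-modify-write, much as the in place element structure holds the DVR for the whole sequence, so parallel updates cannot interleave.

        import threading

        class Counter:
            """Encapsulated data with an atomic read-modify-write accessor."""

            def __init__(self):
                self._lock = threading.Lock()
                self._count = 0

            def increment(self):
                # The whole read-modify-write happens under the lock, so parallel
                # callers are serialised and no updates can be lost.
                with self._lock:
                    value = self._count     # read
                    value += 1              # modify
                    self._count = value     # write
                    return value

            def read(self):
                with self._lock:
                    return self._count

        counter = Counter()
        threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
                   for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(counter.read())   # always 4000; an unprotected read-modify-write could lose updates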

  9. According to instructions, all -ve ends and the common pins must be grounded to a stable ground.

    I think you do not need to tie all the -ve ends. I believe what the diagram is showing is using one -ve as your ground reference. Vcm is then a symbolic voltage indicating that -ve2 is not equal to -ve1; otherwise you effectively have NRSE/RSE.

  10. I'm pretty sure only front panel item property nodes require a switch into the UI thread. This does not force the whole VI into the UI thread (but they are pretty slow; a Value property node is about 1300x slower than a local or global variable).

    I am surprised to hear so much discussion of DVRs for data sharing. They are certainly useful, but provide no synchronisation if you need it. I personally use them more for memory management.

    I also would say FGVs/LV2 globals are just as prone to race conditions! Any application where you have multiple data writers risks data loss with variables, and anywhere with a non-atomic read-modify-write risks race conditions (protection can be built into an FGV/action engine, but it is not inherent).

    But this all detracts from the point. What you need is a form of inter-process communication, and the exact mechanism depends on the type of data you are transferring. Queues are the best for streaming continuous data and pretty good for commands. Shared variables/FGVs are good for latest-value transfers but are prone to losing data if the reader gets out of sync with the writer.


  11. One thing to be aware of with the 9205 is that it is isolated. This means everything is referenced to ground on the connector, not the chassis. Try tying this to a stable ground and see if it improves.


    Also, just be aware that DMMs are slow and often don't show noise. Not that I doubt you, but a DMM reading alone doesn't rule out noise on the line.


  12. Hi Flinstone,

    I have seen real-time targets (although on VxWorks, not PXI) struggle with large directories before, so if you can break it down this is probably best.

    The only other thing I could suggest is maybe trying the file transfer through the web interface. If it is protocol related then this uses a different technique (something over HTTP), but if it is the number of files you will probably see the same thing again.

    Cheers,

    James

    While trying to find a recommended number I found this: http://digital.ni.com/public.nsf/allkb/C9B0A1443BF1C3398625760A004DB976?OpenDocument - it seems 'significant' means a few hundred files.

  13. For standard accounts you only have read access to ProgramData, not read/write, so if the application is to be run by someone it would have to be elevated to admin before being able to write. Rolfk: I wonder whether you have always had admin privileges, or I believe if you have just one user there is some behaviour in Win7 which gives the user that created the folder R/W permissions (I am not sure about this; someone may be able to elaborate better).

    The recommendation from Microsoft is that your installer should set permissions on your folder entry in here if you need it. This is not straightforward from LabVIEW, but I believe it can be done with a batch file (see the sketch at the end of this post). Another option is that you can sit a manifest file with your exe which tells Windows that your application must always run elevated (but I suspect this is not recommended practice!). This appears to have caused confusion and problems for programmers across all languages!

    Mads' link in the reminder is pretty good and I will have to keep it in stock. Here it is again; reading the comments you can see the frustration this can cause: http://blogs.msdn.com/b/cjacks/archive/2008/02/05/where-should-i-write-program-data-instead-of-program-files.aspx?PageIndex=2#comments. Primarily because you will have no such issues with the equivalent in XP.
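
    As a rough sketch of that batch-file idea (expressed here in Python for consistency with the other sketches, with a hypothetical folder path), an installer step could create the folder under ProgramData and grant the built-in Users group modify rights with icacls. The step itself has to run elevated, which installers normally do.

        import os
        import subprocess

        # Hypothetical application data folder under ProgramData.
        app_dir = os.path.join(os.environ["ProgramData"], "MyCompany", "MyApp")
        os.makedirs(app_dir, exist_ok=True)

        # Grant the built-in Users group (SID S-1-5-32-545) modify rights,
        # inherited by sub-folders (CI) and files (OI), so a standard account
        # can write here without elevation. This call requires admin rights.
        subprocess.run(
            ["icacls", app_dir, "/grant", "*S-1-5-32-545:(OI)(CI)M"],
            check=True,
        )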
