Posts posted by ShaunR

  1. We tend towards a RAID system with hot-swap and (on the server) a hot spare rather than a mirrored system (it's interesting, though). The issue we run into is the plant won't even look at the lights on the RAID, look at the monitor (which, admittedly, we try to keep hidden from overly curious users), or listen to the persistent beeping that lets them know things have gone south and they need to pull a spare hard drive out of storage and switch with the bad drive.

    Indeed. But that is more to do with recovery than isolation. All machines (production included) that we spec nowadays are RAID (invariably RAID 10, depending on chassis) and that's irrespective of the topology. Hard drives are cheap!

    Getting production to check stuff is really a maintenance issue. The only way it can be practically resolved is with training and procedures. I work closely with customers' maintenance departments and supply checklists and procedures along with [preventative] maintenance schedules. This, however, is insufficient: unless entries in the maintenance logs are a requirement, they are always skipped. A maintenance log is the first port of call and the second is shift "handover" procedures (the latter usually being a mixture of hardware and software checklists). But unless you get someone to sign or initial something (make them feel responsible for the equipment), you are on a hiding to nothing.

  2. We have a production control, SPC, reporting and analysis package that we sell that requires a server to run. Given the option (plants and plant IT have interesting ideas of how things should be at times), we set up the server as a gateway with as many roadblocks as possible. One customer has a variation on this where there is a 1U rack-mount PC that serves as a gateway. This customer also removes all USB ports and keyboard and mouse (the system has a touchscreen). One other thing I've seen is dual-boot the PC (or use a bootable memory stick) with Windows as the test system and a flavor of Linux to perform system maintenance and repair, and (re)imaging of the hard drives.

    Yup, this is the "dirty" PC that I mentioned.

    The goal is to provide isolation (or a gateway, as you put it) where you can limit exposure and run all your virus and malware scanners et al.

    In the past, I have set up servers using two PCs: one that the production line stores to and gets its configuration from (SPC database) and a mirrored one for access from the offices (a read-only copy, if you like). This means that changes can only be made on the configuration machine, but anyone (who needs it) has access to the data (usually via a web interface and a web API). Periodically, each will scan the other (and sometimes the production machines, dependent on the topology) for viruses, malware and any anomalies, as well as its own drive. If something does get onto either machine, there is an intrinsic backup of the data and a high probability that at least one of the machines will detect it. Additionally (although nothing to do with virii), if one machine goes down, you have a ready replacement that you can just hook up in minutes whilst you wait for a new one to arrive (I have also managed to automate this with alerts to the admins, telling them of the fact, meaning seamless transition: zero downtime). The hard part is getting them to order the new PC, since it all still works....lol.

  3. Being a programmer who mainly deals with automation machines (some of which could take limbs off), I have always been aware that if my software wasn't working perfectly, or if some malicious person fiddled with the code, the effects could have extreme consequences both in terms of injury and hardware failure. So it was no surprise to me when I read How Digital Detectives Deciphered Stuxnet.

    Since LabVIEW is extremely hardware oriented and specifically targeted at the sorts of applications that Stuxnet and the later Flamer could target, I thought I would post some of my thoughts.

    Writing automation systems has made me a bit of an amateur virus enthusiast, not least because if a virus gets onto a machine, I'm the one who has to sort it out. As I have considered the scenarios over the years, I have done what I can with my limited knowledge to make it as difficult as possible for a virus to get onto a machine and, if one does, to limit its ability to spread to others. We all know that Windows is a hot target for any "script kiddie" and, whilst the items below are not a silver bullet and certainly not protection from the most determined malicious user, they are simple to achieve, go some way to making accidental infection more difficult, and limit the scope of any propagation.

    • Turn off "Autoplay" - The easiest way for a virus to propagate via USB.
    • Enable Extensions to be visible - It won't stop a virus, but it may prevent someone (including yourself) from clicking on one.
    • Change the "*.vbs" extension's default action from "Open" to "Edit" - Make the default action for Visual Basic scripts open in a text editor rather than run. Most "script kiddies" rely on this and on extension hiding to trick the user into executing malicious code.
    • Place USB ports and CD drives behind "Locked" panels - Use USB ports that are at the rear, not exposed and that are internal to the cabinet of the machine and disconnect those on the front at the motherboard. Only allow USB/CD access to "trusted" and knowledgeable staff and use USB drives specifically set aside for the machines (one per machine). Insist they must be scanned before use (use the maintenance log!).
    • Boot into your program as the shell - This will remove all the tools that people are generally used to when dealing with Windows (like the task bar) and only enable the access you program into the software. If you also disable "CTRL+ALT+DEL", not even Explorer will be available (even to those who know how to get to it without the Explorer shell).
    • Run your software under a User account and not under an Administrative account - Once installed and operational, your software should not require admin privileges to operate (as a design goal). Use a User (or, if you can, a Guest) account and specifically grant the privileges that your software requires. Auto-login to this account.
    • Take a "Disk Image" of the vanilla install of the machine when the machine is isolated from others. Aside from backup, this is a convenient way of completely removing a virus once infected. Make sure you don't have a virus first!
    • Run the machines off a separate, isolated network - Connect the machines on their own network with their own routers. If a virus gets onto your systems, or if the IT dept gets a virus (more probable), then the other network will not be affected. If access must be provided to the office infrastructure, use FTP access (preferable) or dedicate a "dirty" machine (not one of the production machines) to act as a sentry.
    • Don't let the office IT dept anywhere near the machines. :) - From experience, most IT departments will not support the machines, but they will still insist on pushing a load of corporate profiles and updates that may bring the system to a halt (there have been a couple of exceptions, but the vast majority won't have anything to do with something they neither have the knowledge of nor control over). Most network-propagated virii are introduced via the office networks, and usually IT just shrug their shoulders and leave you to sort it out with one hand tied behind your back due to their security policies. Production machines going down due to a virus actually costs money, and a lot of it, so submitting a help ticket that they might get round to eventually is not an option. It is far better just to close off that attack vector and not involve them at all (if you can ;) ).
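    A few of the Windows tweaks above (Autoplay off, extensions visible, "*.vbs" defaulting to Edit) can be captured in a .reg file. This is a sketch against the standard registry locations; verify it against your own Windows build before rolling it out:

```
Windows Registry Editor Version 5.00

; Disable Autoplay/Autorun for all drive types (0xFF = every drive letter)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff

; Show file extensions for the current user
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"HideFileExt"=dword:00000000

; Make double-clicking a .vbs file edit it rather than run it
[HKEY_CLASSES_ROOT\VBSFile\Shell]
@="Edit"
```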

    I view virus scanners as the last line of defense and, for some machines, a scanner cannot be installed. Therefore, these are a few of the simpler things I routinely implement.

    So how do you try to mitigate malicious code?

  4. like forcing the FP into memory in the plugins when you use property nodes on the control references, or making the property nodes run in the UI thread; maybe ShaunR can answer this. However, if the FP doesn't need many updates (real-time graphs), I think I prefer my design even with some drawbacks, since it is clean and easy to maintain.

    Not so much loading FPs into memory, but definitely running in the UI thread.

    "Property nodes for controls always run in the UI thread" -- true

    Anything under the "VI Server" category will always run in the UI thread. This includes VI references, Application references and all panel/diagram object references. If you are making a lot of these calls, you might consider moving them to a subVI that can be set to run in the UI thread. Other categories, like VISA, will use any thread. ActiveX has its own rules for which objects can be accessed from which threads. (If you don't have to know about apartments, trust me you don't want to.)

    "Any control reference causes the panel of the VI containing the control to be loaded" -- true

    "Any control reference will cause the panel of the VI containing the control to be included in built applications" -- false

    This is related to the earlier comment about the documentation "Loads the front panel into memory" characteristic. This means that when the operation runs it forces the panel to be loaded. This is different than things that cause the panel to be loaded immediately when the VI is loaded, which is different than things that cause the application builder to include the panel by default.


  5. I use string messaging for passing data between UI and action loops/VIs. The command channel is usually a queue and updates are via events.

    Basic setup is that the UI only handles operator input and control updates (usually only control name and data). As in your case, those strings then get passed to either another loop in the same VI or, more often than not, a dynamically launched "controller". (strings will typically be of the form "VIName->ControlName->Command->ControlData")

    This is straightforward for most controls, but the tree view is a bit of an anomaly. To get round property nodes being required to manipulate the control, I flatten the control's reference as part of the data payload. The controller knows how to interpret this payload, retrieves the reference, and updates the control according to the command.

    For treeview-type controls, this often means those controls have a controller loop/VI in the top-level VI specifically for them, since I generally consider the top-level VI to be only for UI interaction. It also separates out other processes that can be run (as I previously said, dynamically) without forcing the front panels into memory or risking them being run in the UI thread.

    Since I consider the top-level VI to be for UI, why bother with a separate controller loop/VI? The reason is that you can also run a TCP/IP process and, using exactly the same string commands, control it remotely.
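    Since LabVIEW diagrams can't be pasted here, a minimal text-language sketch of the command-string scheme (the names are made up; Python used purely for illustration):

```python
DELIM = "->"

def build_message(vi_name, control_name, command, data):
    """Pack a UI event into the flat "VIName->ControlName->Command->ControlData" form."""
    return DELIM.join([vi_name, control_name, command, data])

def parse_message(msg):
    """Split a command string back into its four fields; the controller then
    dispatches on the Command field (a string case structure in LabVIEW terms)."""
    vi_name, control_name, command, data = msg.split(DELIM, 3)
    return vi_name, control_name, command, data

msg = build_message("Main.vi", "Start", "SetValue", "TRUE")
print(msg)                  # Main.vi->Start->SetValue->TRUE
print(parse_message(msg))
```

    The split limit of 3 means the data payload itself may safely contain the delimiter.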

  6. The biggest problem with using string for common data types is that it leaves the formatting of that data up to each individual Serializable class to define the format. If you have N objects encoded into a file each of a different class and each one has a timestamp field, you can end up with N different formats for the strings. On the other hand, if we give Formatter alone knowledge of the timestamp (and other types of interest), it can have methods to control the formatting and parsing, and then we leave those off of the PropertyBag class. I'll draw it up and see what that looks like.

    Hmmm. I'm not sure what you have in mind (need to see the diagram I guess). The serializable just has a time (in whatever base format you like-integer, double, etc.....but something useful since that will be the default) it's just of "type" string. The "formatter" is still the modifier from this base format. The default serialize is obviously whatever you decide is the base. But that can be overridden by the formatter to produce any format you like.

  7. Are CAN devices DAQ devices, or do they do something else?

    Can anybody help me?

    I have seen the USB-8473 high-speed CAN module and need to know more about it and about general CAN device functionality.

    thanks

    CAN is merely a communications interface, like Ethernet, RS-232, RS-422, STANAG 1553, etc.

    The USB-8473 is a converter that enables you to connect, via a USB port on your computer, to a device that has a CAN interface.

  8. Haha, cool. More numbers. So with a single read and single write, I get 30 ms frame time. Subtract out the 5ms transit time over the serial bus and that leaves 12.5 ms per VISA operation.

    Since my test is fixed at 4 bytes received, I broke the read operation down to four single byte reads, for a total of five VISA operations per frame. Wouldn't you know, that leads to a frame time of 70 ms. Knock 5 ms off that for the communication time leaving 65 ms, 65/5 operations = 13 ms. Seems to be constant behavior of 12-13 ms per VISA operation.

    Right-click on the read and write VIs, set them to "synchronous", and you might be able to get that down a bit.
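    The per-operation arithmetic in the quoted benchmark is worth making explicit (a quick check; Python used for illustration):

```python
def per_op_overhead_ms(frame_ms, transit_ms, n_ops):
    """Average fixed cost of one VISA call once bus transit time is removed."""
    return (frame_ms - transit_ms) / n_ops

# One write + one read per 30 ms frame
print(per_op_overhead_ms(30, 5, 2))  # 12.5
# One write + four single-byte reads per 70 ms frame
print(per_op_overhead_ms(70, 5, 5))  # 13.0
```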

  9. > I presume your reluctance to support many of the types in LabVIEW

    ShaunR: That's part of it. Just as large a concern is the complexity added for developers of Serializers having to work with all the types, and the work that Formatters have to do to handle all of the types.

    I do keep looking at JSON's 5 data types and thinking, "Maybe that would be enough." But I look at types like timestamp and path, and I know people would rather not have to parse those in every serializer or serializable, and *those* *aren't* *objects*. That historical fact keeps raising its ugly head. They don't have any ability to add their components piecemeal or to define themselves as a single string entity.

    I would actually argue that maybe 1 type is enough and the problem is purely string manipulation. However, that excludes the binary (hence my suggestion).

    My JSON VIs, the JKI config file and the rather splendid library posted in the Setting Control Property By Name thread are all about "untyping" and "re-typing". I have found strings far superior to any other form for this, since all LabVIEW datatypes can be represented this way and, for human-readable formats, have to be converted to them anyway. The introduction of the case structure's support for strings has been a godsend.

    I'm not sure what you mean by "They don't have any ability to add their components piecemeal or to define themselves as a single string entity." They are still just collections of characters that mean something to humans. And we are not talking about adding functionality to an existing built-in object, are we?


    Indeed, things would be much easier for this application if scalars and composite types like timestamps and paths were objects, but of course that would open such a huge can of worms for practically every other situation that I shudder to think of how NI could ever even consider moving from that legacy.

    Out of curiosity, I'm wondering if there's a creative way of handling arrays with the scripting magic? Maybe upon coming across an array some method is called to record info about the array rank (number of dimensions, size of each); then the generated code would loop over each element using the appropriate scalar methods to handle serialization of each element? This could potentially allow support for arrays of arbitrary rank, but would likely be slow, as it would involve serializing each element individually.

    Just thinking aloud for now, I don't have time to really think it through thoroughly yet.

    N-rank arrays are fairly straightforward to encode and decode if using strings as the base type (it's a parsing problem solved with recursion, where the minimum element is a 1D array). Size and dimension are really only required for binary, and the flatten already does that. The real difficulty is decoding to LabVIEW's strict typing, since we cannot create a variant (which circumvents the requirement for typed terminals) at run-time. We are therefore forced to create a typed terminal for every type that we want to support, and to limit array dimensions to those conversions we have implemented.

    I think maybe you are looking at it from the wrong end. Timestamps and paths are really, really easy to serialise, and so is the data inside an object's cluster (we can already do all of this). In fact, paths and timestamps are objects but, apart from their properties and data, not a lot of good to properly serialise, since we cannot create them at run-time (I've been dreaming of this for decades :) ).
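    To make the recursion concrete, here is a sketch of N-dimensional array encoding/decoding with strings as the base type (the bracket/comma conventions are my own choice; Python used for illustration):

```python
def encode(arr):
    """Recursively encode an N-dimensional (nested) list as a string.
    Each dimension gets its own bracket pair; scalars are stringified."""
    if isinstance(arr, list):
        return "[" + ",".join(encode(x) for x in arr) + "]"
    return str(arr)

def decode(s):
    """Recursively parse the string back into nested lists of floats.
    Splitting must respect bracket depth, hence the manual scan."""
    if not s.startswith("["):
        return float(s)
    items, depth, start = [], 0, 1
    for i, c in enumerate(s[1:-1], 1):
        if c == "[":
            depth += 1
        elif c == "]":
            depth -= 1
        elif c == "," and depth == 0:
            items.append(decode(s[start:i]))
            start = i + 1
    if start < len(s) - 1:
        items.append(decode(s[start:len(s) - 1]))
    return items

data = [[1.0, 2.0], [3.0, 4.0]]
print(encode(data))          # [[1.0,2.0],[3.0,4.0]]
print(decode(encode(data)))  # round-trips back to the nested lists
```

    The hard part the post describes, mapping the decoded data back onto strictly typed terminals, has no analogue here; a dynamic language sidesteps exactly the problem LabVIEW has.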

  10. AQ.

    I presume your reluctance to support many of the types in LabVIEW is down to reconciling the speed and compactness of binary with the easy (albeit slower and more bloated) representation of portable, text-based formats. Perhaps a different way of looking at this is to separate the binary from the text-based serialization. After all, aren't classes just XML files?

    All scalars and objects can be represented in XML, JSON and even INI files, since the standards are well defined. The string intermediary is a very good representation, since all types can be represented in string form. An API with only these features would be invaluable to everyone, including us muggles (JKI config file VIs on steroids). We could then add more formats as the product matures.

    The flatten already accepts objects but just doesn't quite serialize enough. That could be addressed to provide the binary.

    This is actually one feature that would budge me from LV 2009.
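    As an illustration of the string intermediary outside LabVIEW (the ISO-8601 convention for the timestamp is one possible choice, not a fixed standard; field names are made up; Python shown):

```python
import json
from datetime import datetime, timezone

# Serialize: reduce every "awkward" type (timestamp, path) to an agreed
# string form first, then let the format layer (JSON here) do the rest.
record = {
    "timestamp": datetime(2012, 6, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
    "path": "/home/lv/data/run1.tdms",
    "gain": 1.5,
}
text = json.dumps(record)

# Deserialize: parse the strings back into the strict types the caller wants.
loaded = json.loads(text)
ts = datetime.fromisoformat(loaded["timestamp"])
print(ts.year, loaded["gain"])  # 2012 1.5
```

    Swapping JSON for XML or an INI writer changes only the format layer; the string forms of the values stay the same.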

  11. Wow, that guy had a lot of time and dedication for that. Here's the site, which explains a little about what it is.

    http://www.kshif.com/lv/index.html

    I expected cool scripting or something to get those attributes, but when I dug down I saw lots of case statements for each attribute, for each control type.

    Yup indeedy.

    There are some optimisations you can do to reduce the cases. All controls have a "Label" and a "Caption" of type text (a single case); U8, U16, I8, I16 et al. can share one case, etc. But as soon as you get to graphs and the more complex controls, you pretty much end up with a case for each property/type.

    That implementation is a lot cleaner than mine, though :worshippy: but, in my defense, mine does produce JSON strings (for obvious reasons).

  12. Don't you just love strict typing ;)

    I've got a control scraper and its counterpart which sets the values. Unfortunately it's part of the websocket API, so I can't give it to you.

    Basically, you have to use variants to get the control type and use the control's ref with the appropriate value property type to set it (a case structure with a frame for almost every type). This means that it runs in the UI thread, which is a real downer.

    You can encode the type in your string if you want it to be generic for transmission, or you can use coercion to force values to the target control's type (a different kind of genericity).

    I would also suggest JSON rather than a comma delimited string :rolleyes:

    Alternatively, you can wait for the serialization VIs in the other thread. :D
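    Encoding the type in the string might look like this (the tag names and ":" delimiter are hypothetical; Python used for illustration):

```python
# Map type tags to parser functions: the receiving end dispatches on the
# tag (a string case structure in LabVIEW) and coerces to the target type.
PARSERS = {
    "BOOL": lambda s: s == "TRUE",
    "DBL":  float,
    "I32":  int,
    "STR":  str,
}

def encode_value(type_tag, value):
    """Prefix the stringified value with its type tag."""
    return f"{type_tag}:{value}"

def decode_value(msg):
    """Split off the tag and coerce the remainder to the tagged type."""
    tag, _, raw = msg.partition(":")
    return PARSERS[tag](raw)

print(decode_value(encode_value("DBL", 3.14)))  # 3.14
print(decode_value("BOOL:TRUE"))                # True
```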

  13. I'll try to remember to bring CDs with me and if I remember (and have a compliant relative), I might be able to actually test it in the development environment anyway. Any idea if LabVIEW can be as deeply-rooted on OS X as it is on Windows? I don't want to mess up someone else's computer.

    I think I have a Snow Leopard VM somewhere. If I find it, I'll let you know.

    No idea about the LV bit. I just clicked on the .dmg and installed it, since this is the only reason for me to use the Mac (I'm not even sure I know how to uninstall it :P ).

    PM me an email address and I'll send the API anyway, so that if you do get something working, you're ready to go.

  14. If you can (or want) to build a test application that I can use with just the run-time, I can test it on another version of OSX this weekend. I'll have to check to be sure what version (I think Snow Leopard), but let me know if that's valuable to you.

    All testing is valuable. I can compile the speed test, which will also exercise the AES encryption as well as give a benchmark on your Mac (I'm running in a VM). Let me know which run-time engine you have (2009, 2010 or 2011) and an email address, and I'll send it to you (I don't know how big it will be, but knowing LV, a few MB, if that's a consideration for your mailbox limits).

    ......a bit later....

    Scrub that. I don't have the app builder for Mac :frusty: . Thanks for the offer though.
