Referring to comments in here, I'm looking for a solution to the font issue in the Icon Editor.
Selecting different fonts does not seem to change the font in the Icon Editor, specifically in the Icon Text tab input.
I've had no luck playing with Tools => Options => Environment => Linux => Use pixel-based font sizes; whatever I select, the font won't change at all.
This leads me to believe the issue is with the Icon Editor and the way it loads fonts on Linux. I know that it works on Windows, but on Linux the Icon Editor seems to always default to the same font.
It does change, but for the worse, if I select any of the built-in(?) fonts (LabVIEW Application / LabVIEW Dialog / LabVIEW System) in the Icon Editor Properties.
I've tried just about everything I found on the internet, plus then some: rebuilding the font cache, copying the small fonts .ttf file from Windows, etc., so I'm thinking the easy fixes are already exhausted.
Environment: LabVIEW 2016 on OpenSUSE Leap 42.3.
So, I'm looking for help on the LAVA forums with this.
I'm trying to insert some NULL values in a datetime field.
In the example, the DATA_INSERIMENTO field has a non-empty value and works correctly, but DATA_INTERVENTO doesn't accept NULL.
If I use an empty string instead of NULL, the VI runs without any errors, but it fills the database field with 1900-01-01, which is not what I want.
If I use the DB Tools NULL VI, I get a different type of error, maybe because I'm wiring a variant to a cluster of strings.
If I use the Variant to Data VI for the NULL value, it returns an empty string, so that's not the result I need either.
If I use the string you see in the label at the bottom of my diagram directly in SQL Server manager, it works correctly.
How can I obtain the same result with LabVIEW?
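For what it's worth, here is a minimal sketch of the underlying principle using Python's sqlite3 rather than SQL Server/ADO (the table and column names just mirror the post, and the whole setup is made up for illustration): binding a genuine NULL to the parameter, instead of an empty string, is what makes the database store NULL.

```python
import sqlite3

# Illustrative table modeled on the post; SQLite stands in for SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interventi (DATA_INSERIMENTO TEXT, DATA_INTERVENTO TEXT)")

# None is bound as SQL NULL; '' would be stored as an empty string instead,
# which is what produces the bogus 1900-01-01 date in a datetime column.
conn.execute(
    "INSERT INTO interventi (DATA_INSERIMENTO, DATA_INTERVENTO) VALUES (?, ?)",
    ("2017-05-04 10:00:00", None),
)

row = conn.execute("SELECT DATA_INTERVENTO IS NULL FROM interventi").fetchone()
print(row[0])  # 1 -> the field really is NULL
```

The LabVIEW-side equivalent of `None` is whatever the toolkit binds as a true NULL parameter; the point is that the NULL has to reach the driver as a typed NULL, not as text.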
My question relates to retrieving decimated data from the database.
Given the case where I have 1M X steps and 1M Y steps (a total of 1000000M data points per channel), how do I efficiently get a decimated overview of the data?
I can produce the correct output data by using
Select X,Y,Z,Float1 from P
GROUP BY Y/1000
This outputs only 1×1000 data points instead of 1×1000000: one dimension quickly decimated. The problem is that it iterates over all the data (and this takes quite some time). If I do a different query to retrieve only 1000 normal points, it executes in under 100 ms.
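As a small illustration of that decimation (using Python's sqlite3 and a tiny made-up P table, since the real data set isn't available): grouping on Y/1000 collapses each 1000-wide Y bucket to a single arbitrary row, but SQLite still has to visit every row to form the groups.

```python
import sqlite3

# Tiny stand-in for the post's P(X, Y, Z, Float1) table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE P (X INTEGER, Y INTEGER, Z INTEGER, Float1 REAL)")
conn.executemany(
    "INSERT INTO P VALUES (?, ?, ?, ?)",
    [(0, y, 0, float(y)) for y in range(5000)],
)

# One (arbitrary) row per 1000-wide Y bucket: 5000 rows -> 5 rows out.
rows = conn.execute("SELECT X, Y, Z, Float1 FROM P GROUP BY Y/1000").fetchall()
print(len(rows))  # 5
```

Note that with a bare (non-aggregated) column list SQLite picks an arbitrary row from each group, which is fine for a quick decimated overview but not deterministic.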
I would like to use a CTE to do this, thinking that I could address directly the 1000 elements I am looking for.
WITH RECURSIVE cnt(x) AS ( SELECT 0 UNION ALL SELECT x+1000 FROM cnt LIMIT 1000 ) WHAT GOES HERE?;
So if I can create a CTE with a column x containing my Y indices, how do I get from this to directly accessing
Float1 FROM P WHERE X=0 AND Y=cte.x
SELECT Float1 from P WHERE X IN (SELECT x FROM cnt) AND Y=0 AND Z=0
Using the "IN" statement is apparently quite inefficient (and seems to return wrong values). Any ideas?
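One possible completion, sketched with Python's sqlite3 on a tiny made-up P table (the sizes and data are assumptions, only the P/X/Y/Z/Float1 names come from the post): JOIN the recursive CTE to P on the Y index instead of using IN, and bound the recursion with a WHERE clause, so each generated index becomes one indexed lookup rather than part of a full scan.

```python
import sqlite3

# Made-up miniature of the post's table, with the index the post mentions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE P (X INTEGER, Y INTEGER, Z INTEGER, Float1 REAL)")
conn.executemany(
    "INSERT INTO P VALUES (?, ?, ?, ?)",
    [(0, y, 0, float(y)) for y in range(10000)],
)
conn.execute("CREATE INDEX idx_p ON P (X, Y, Z)")

# Generate the wanted Y indices (0, 1000, ..., 9000) and JOIN them to P;
# SQLite then does one indexed probe per generated index.
rows = conn.execute("""
    WITH RECURSIVE cnt(x) AS (
        SELECT 0
        UNION ALL
        SELECT x + 1000 FROM cnt WHERE x + 1000 < 10000
    )
    SELECT P.Float1
    FROM cnt
    JOIN P ON P.X = 0 AND P.Y = cnt.x AND P.Z = 0
    ORDER BY cnt.x
""").fetchall()
print([r[0] for r in rows])  # [0.0, 1000.0, ..., 9000.0]
```

On the real table the same query shape should only touch the rows addressed by the index, instead of iterating over everything the way the GROUP BY version does.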
In addition, accessing even a small number of data points from within an 8 GB SQLite file (indexed!) is currently taking upwards of 30 seconds to execute. With 1.5 GB files it seemed to be in the region of a few milliseconds. Does SQLite have a problem with files above a certain size?
This is a package containing LabVIEW bindings to the client library of the PostgreSQL database server (libpq).
The DLL, version 9.3.2, and its dependencies are included in the package. These DLLs are taken from a binary distribution on the PostgreSQL website and are thread-safe (i.e. a call to PQisthreadsafe() returns 1). At the moment the DLLs are 32-bit only.
The VIs are saved in LabVIEW 2009.
So this package works out of the box if you have 32-bit LabVIEW 2009 or higher on any supported Windows operating system.
Because this obviously is a derived work from PostgreSQL, it is licensed under the PostgreSQL License.
A few words regarding the documentation: this package is meant for developers who know how to use libpq. You have to read and understand the excellent documentation for that library. Nonetheless, all VIs contain extracts of that documentation as their help text.
What's coming next?
- adding support for 64-bit
- adding support for Linux (anybody out there willing to volunteer for testing?)
- adding support for Mac (anybody out there willing to volunteer for testing?)