
"DIY" Octave Band Levels VI


Recommended Posts

Hello,

I am working on code that will read a waveform of a sound pressure measurement and calculate the octave band levels. I would be incredibly grateful if someone could take a glance at my code to see what I have wired incorrectly...

The issue I think I'm having: the output of "Basic Averaged DC-RMS" won't index when I wire it into the for loop. I tried to get around this by building arrays for each RMS output and then indexing those into the for loop, but doing that doesn't let me graph the output... Any ideas as to why the "Basic Averaged DC-RMS" output won't index directly?

 

The code (inspired by the new NXG example) is attached; both VIs must be downloaded in order to run the "Octave Band Levels" VI. Also, I'm aware that LV has a toolkit for sound and octave band analysis... but I do not have access to that toolkit, which is why I'm trying to write my own VI.
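For anyone following along without the attached VIs, here is a rough text-language sketch of the calculation being attempted: split the waveform into octave bands with a Butterworth bandpass, take the RMS of each band, and convert to dB. It is Python/SciPy purely as an illustration; the band centers, filter order, and 20 µPa reference are my assumptions, not values taken from the attached code.

import numpy as np
from scipy.signal import butter, sosfilt

def octave_band_levels(x, fs, centers=(63, 125, 250, 500, 1000, 2000, 4000, 8000),
                       p_ref=20e-6, order=3):
    """Octave band levels (dB re p_ref) for a pressure waveform x sampled at fs."""
    levels = []
    for fc in centers:
        lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)                       # octave band edges
        sos = butter(order, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfilt(sos, x)                                          # band-limited pressure
        rms = np.sqrt(np.mean(band ** 2))                               # same job as the RMS VI
        levels.append(20 * np.log10(rms / p_ref))
    return np.array(levels)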

Thanks in advance!

Octave Band Levels.vi

Waveform Butterworth Filter.vi

rmswont index.png

Edited by cpipkin
trying to be more specific with problem...

Well, did you read the description of the error, or look at the data types of the terminals and indicators? Your output is a 2D array of a cluster, and in that cluster is a 1D array of doubles. The waveform graph can accept multiple data types, but not this one. If you read the help on the waveform graph, you'll see that for a single plot you can either provide a waveform data type or a cluster with x0, deltaX, and the Y values. You can also have multiple plots by providing a 1D array of these.

But honestly, after all of this description of what is what, the simple fix is to right-click the for loop tunnel and select Tunnel Mode >> Concatenating. This makes sure the 1D arrays going in don't get indexed into a 2D array but instead get concatenated, and the result can be wired to the graph on its own. Also, there are a bunch of inefficiencies in that code that can be fixed with some for loops. Is this how it is presented in the NXG example? That for loop will only ever run once.
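If it helps to see the indexing vs. concatenating distinction outside of LabVIEW, here is a tiny Python analogue (purely illustrative) of what the two tunnel modes produce from one 1D array per iteration:

import numpy as np

chunks = [np.random.rand(100) for _ in range(8)]    # one 1D array per loop iteration
indexed = np.stack(chunks)                          # "indexing" tunnel: an 8 x 100 2D array
concatenated = np.concatenate(chunks)               # "concatenating" tunnel: one 800-sample 1D array

The concatenated 1D result is the shape the graph will accept directly.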


Attached is an updated version that I think does the same work in a less Rube Goldberg kind of way, though I think it can still be improved.

Example_VI.png


Nice. Thanks, Hooovahh. This certainly simplifies the code & makes it easier to look at. It also solved the graphing issue I was having.

Now that the graphing problem is fixed, I'm finding that this still isn't working... Perhaps it's because I'm bundling the band center frequencies instead of creating an XY? I will work on it.
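For reference, the XY route is just pairing each band's center frequency (x) with its level (y); a minimal illustration in Python with made-up numbers:

import numpy as np

centers = np.array([63, 125, 250, 500, 1000, 2000, 4000, 8000])        # Hz, x values
levels = np.array([72.1, 75.4, 78.0, 80.2, 79.5, 76.3, 71.8, 65.0])    # dB, placeholder y values
xy_points = list(zip(centers, levels))                                 # (x, y) pairs for an XY plot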

Octave Band Levels_Simplified.vi

Waveform Butterworth Filter.vi



  • Similar Content

    • By cpipkin
      Hello
      I am trying to save TDMS files that ideally contain the following:
      - 3 xy graphs (each containing two 1d arrays)
      - 1 waveform
      The problem I'm running into is that when I convert the XY graphs to waveforms, the x-axis is converted to time, which isn't real or useful to me. I've attached screenshots of what the XY graph should look like vs. what it ends up looking like with the waveform.
       
      How do I make sure the x-axis is preserved so that I can save to TDMS? (A sketch of one way to lay this out follows below the attached VI.)
       
      Edit: VI is included & pictures have been updated to better represent my code.
       

       
       

      TDMS Waveform Example.vi
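      One way to keep the real x-axis is to skip the waveform conversion and write the x and y arrays as two plain channels in the same group. A minimal sketch of that layout using the npTDMS Python package (the group/channel names and data are made up for illustration; this is not the poster's VI):

      import numpy as np
      from nptdms import TdmsWriter, ChannelObject

      x = np.linspace(0.0, 10.0, 1000)               # the "real" x-axis values
      y = np.sin(x)                                   # the measured y values

      with TdmsWriter("xy_example.tdms") as writer:
          writer.write_segment([
              ChannelObject("XY Graph 1", "X", x),    # x-axis stored as its own channel
              ChannelObject("XY Graph 1", "Y", y),    # y values paired with it by index
          ])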
    • By Reds
      I'm working on a project that requires sound output via the Windows audio subsystem, and so I've recently found myself using the LabVIEW "Sound" palette VIs for the first time in... well... ever!
       
      The sound output ("Play") VIs allow you to set up and configure an output "task". As part of that configuration, you define a buffer size. Once that sound output task is configured, you effectively write your sound samples into this buffer over time, periodically refreshing it with new audio sample data.
       
      That's all fine and good, but unfortunately it seems like there is no mechanism to query the buffer status and find out if the buffer is about to overflow or underflow.  While this might not seem to matter if you're playing a sound file for a few minutes on a machine with lots of RAM, it definitely does matter if you're streaming live audio continuously through the system for several days or weeks. 
       
      If you refresh the buffer with new audio sample data at a rate that is just slightly faster than the audio card's configured sample rate, then it seems like the buffer will eventually use up all available PC RAM. If you refresh that buffer just slightly slower than the audio card's configured sample rate, then it seems like the buffer will eventually underflow and create a glitch in the audio output. And since there is no way to monitor whether the buffer is trending toward overflow or underflow, there is no way to figure out how to adjust the rate at which you feed new audio sample data into the buffers.
       
      Am I missing something here?  Is the audio subsystem doing some sample rate conversion that I don't know about?  What is the proper way to ensure that the sound playback buffers do not overflow or underflow over an extended period of time?
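      Not a LabVIEW-specific answer, but the usual workaround when the buffer fill level cannot be queried is to pace the writes yourself: track how many samples you have handed to the device versus how many it should have consumed by wall-clock time, and throttle when the difference grows. A rough Python sketch of that bookkeeping (the rates, chunk size, and the two stub functions are assumptions for illustration):

      import time

      FS = 44100                   # configured output sample rate
      TARGET_BUFFERED = FS         # aim to stay roughly one second ahead of playback
      CHUNK = 4410                 # write 0.1 s of audio at a time

      def next_audio_chunk():
          """Stand-in for wherever the live samples come from (hypothetical)."""
          return [0.0] * CHUNK

      def play_write(chunk):
          """Stand-in for the call that pushes samples to the sound output (hypothetical)."""
          pass

      start = time.monotonic()
      samples_written = 0

      for _ in range(100):                                    # the streaming loop
          consumed = int((time.monotonic() - start) * FS)     # samples the card should have played by now
          backlog = samples_written - consumed                # estimate of samples still queued
          if backlog > TARGET_BUFFERED:
              time.sleep((backlog - TARGET_BUFFERED) / FS)    # ahead of the card: wait before writing more
          chunk = next_audio_chunk()
          play_write(chunk)
          samples_written += len(chunk)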
       
       
       
       
    • By FrankH
      Hi All!
       
      Since LabVIEW only uses the Windows MME driver for sound in and out, I'm looking for another way to get sound data into LabVIEW with much less latency.
       
      Has anyone used ASIO4ALL for audio input?
       
      For sound output I use MIDI via MME and Windows' own synthesizer. Its latency is really not high. But the input data arrives with several hundred milliseconds of latency.
       
      With MME: I tested on 2 systems. About 350 ms latency, and not really depending on the system load. I also tried small sample packages of 600 and 1200 bytes/channel and a sound card acquisition rate of 96 kS/s. To collect that amount of data the PC should theoretically need 6.25/12.5 ms, plus data transfer time and some reaction times of the system modules. But the response time of MME seems independent of it; it always needs around 320...380 ms. And the processor load was very low.
       
      Maybe someone can help me to use a faster software interface (ASIO?).
       
      Best greetings,
      Frank
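      I can't speak to calling ASIO4ALL from LabVIEW directly, but for comparison, callback-based audio APIs typically get input latency down to a few milliseconds. A small sketch of the callback pattern with the python-sounddevice binding (the sample rate, block size, and handling of the data are assumptions for illustration):

      import sounddevice as sd

      def on_audio(indata, frames, time_info, status):
          # indata arrives in small blocks (64 samples is about 1.3 ms at 48 kHz)
          if status:
              print(status)                   # report over/underruns
          # ...hand indata.copy() to the rest of the application here...

      with sd.InputStream(samplerate=48000, blocksize=64, channels=1,
                          latency='low', callback=on_audio):
          sd.sleep(10_000)                    # keep the stream open for 10 seconds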
    • By patleb
      I have been using the Developer Suite for quite some time and have found some of the VIs in the Signal Processing Toolkit very helpful. I am not involved in video processing at this point; lots of underwater acoustics. There are many VIs I have not used, and there are limited examples in many cases. I have not found current resources that discuss DSP applications for most of the VIs. What resources are available?
      Thanks.
      pat