Youssef Menjour

Members • Posts: 80 • Days Won: 8

Posts posted by Youssef Menjour

  1. 38 minutes ago, X___ said:

    I wish I could be as excited as this deserves, but with the abysmal transformation of NI into a money machine, making it impossible to get support for something as simple as re-activating a permanent license, I am on my way to Python...

    It's a shame, because our library is really disruptive.
    Being able to easily integrate any deep learning model into LabVIEW architectures is a pure joy.

    One year to convince people it is possible.
    One year of hard work to develop the Graphical Deep Learning Library.
    One year of difficulties.
    One year to do it.

  2. Hello everyone,

    It is now time for us to communicate about the project. First of all, thank you for the support some of you have given us over the summer. We didn't go on vacation and kept working on the project.

    We didn't communicate much because we were so busy.

    HAIBAL will be released soon; it is a little delayed, but it is coming.

    We have solved many of our problems and are actively continuing development. Release 1 should be coming soon, and we are thinking of setting up a free beta version for the community to give us feedback on the product. (What improvements would you like to see?)

    For the official release, we might be a little late because the graphics side is not far along yet. We still have to make a website and a YouTube channel for the HAIBAL project, not to mention the design of the icons, which has not been started yet.

    In short, our designer has a lot of work ahead.

    In the meantime, here is the promotional video of HAIBAL!

    See you soon! Be patient, the revolution is coming.

    HAIBAL link.png

  3. On 6/3/2022 at 4:24 PM, Neil Pate said:

    Maybe completely unrelated, but I had a similar issue some time ago. I had some software which just did not run on a certain PC, it gave some weird error message. The error was actually in the LabVIEW "Mean.vi" which is part of the advanced analysis libraries.

    See this thread for my post.

    To cut a long story short, I had to add a setting to my PC environment variables. Doing so allowed the Math Kernel libraries to work with my CPU. Something similar to this

    MKL_DEBUG_CPU_TYPE=4

    image.png

    Thank you for your help, but I'm not sure that's the right way to solve this problem, because if I understood correctly, you propose modifying my machine's configuration to make the VI work.

    That's not user friendly if I want to use MKL inside an exported library. (Or I would have to script it to automate the installation.)

    By the way, I also looked at the dependencies.

    image.png

    I found MKL_intel_THREAD.2.DLL (C:\Program Files (x86)\Intel\oneAPI\mkl\2022.1.0\redist\intel64); unfortunately, moving this DLL didn't work. (Worth a try!)

     

    image.png

     

    --> I suppose MKL_intel_THREAD.2.DLL calls other DLLs, so I would have to scan that one too to know its dependencies, etc. --> maybe there is a better way to solve this (edit: done, see the image above 🤠 --> tried and failed)

    image.png

     

     

    Is it possible to script the PATH environment variable modification to make this more acceptable? (I already know the answer is yes, but my current knowledge of this subject is limited.)

    We can take inspiration from the DNNL library (another library in the Intel package): Intel provides a script to set the environment variables, but when I launch it in my cmd console it doesn't seem to work.

    I suppose I'm doing it wrong.
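
    For what it's worth, here is a minimal per-process sketch of that idea in C++ (assuming the oneAPI redist path quoted above; the wrapper DLL name is hypothetical). Prepending the folder to PATH before the wrapper DLL is loaded lets the loader resolve the MKL dependencies without touching the machine-wide configuration:

    #include <windows.h>
    #include <string>
    #include <vector>

    // Prepend the MKL redist folder to this process's PATH, then load the DLL.
    // Assumption: oneAPI 2022.1.0 installed at the path quoted in this post.
    static HMODULE LoadDllWithMklOnPath(const wchar_t* dllName)
    {
        const std::wstring mklRedist =
            L"C:\\Program Files (x86)\\Intel\\oneAPI\\mkl\\2022.1.0\\redist\\intel64";

        // Read the current PATH (32767 is the documented maximum length).
        std::vector<wchar_t> buf(32767);
        GetEnvironmentVariableW(L"PATH", buf.data(), (DWORD)buf.size());

        // Prepend the redist folder for this process only.
        const std::wstring newPath = mklRedist + L";" + buf.data();
        SetEnvironmentVariableW(L"PATH", newPath.c_str());

        // Dependencies of dllName are now resolvable through the updated PATH.
        return LoadLibraryW(dllName);
    }

    A .bat doing the same (as Intel's vars.bat does) only helps if LabVIEW is started from that same console, since a process only inherits PATH changes from the environment it was launched from.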

     

    vars.bat

  4. 21 hours ago, dadreamer said:

    In fact your app requires the libraries on the very first level shown in Dependency Walker. The others are needed too, but they are being loaded by their "parent" DLLs, so you don't need to worry about them. Could you collapse that DLL list on the left of the DW, so we could see the first level libraries except sycl.dll and kernel32.dll? Don't look at those red warnings as they are about not finding some libraries, which likely are loaded during the runtime or have delayed loading or could even be absent in the system (as some .NET assemblies) and the app is able to run fine without them. So if only sycl is necessary, then you could either provide the path to it in your PATH environment variable or try to put it near your own DLL and check, whether LabVIEW loads it all without errors.

    It works!!! 💪 SOLVED

    image.png

    I put sycl.dll in the same place as my called DLL, and LabVIEW worked with no errors!!!

    dadreamer, many thanks!!!!!!! 💪 SOLVED

    18 hours ago, Neil Pate said:

    Maybe completely unrelated, but I had a similar issue some time ago. I had some software which just did not run on a certain PC, it gave some weird error message. The error was actually in the LabVIEW "Mean.vi" which is part of the advanced analysis libraries.

    See this thread for my post.

    To cut a long story short, I had to add a setting to my PC environment variables. Doing so allowed the Math Kernel libraries to work with my CPU. Something similar to this

    MKL_DEBUG_CPU_TYPE=4

    image.png

    OK, let's try to solve this one now!!

     

  5. I have the same problem (LabVIEW error 13) when I use a function from the MKL (Math Kernel Library).

    There must be a dependency on a file that is not loaded at DLL run time.

    The question now, to solve the problem, is: how do I add its dependencies properly?

    🤔 Another question comes to mind:

    Logically, as it stands, my DLLs should not work either if I call them from C code (this is normally independent of LabVIEW) --> I will check (I need to find out how to call a DLL from C code; see the sketch below).

    If this is the case, then our solution is to include all runtime dependencies (which is of course possible; you just have to know how to do it).

    One thing is sure, I will have learned a lot!
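
    A minimal sketch of such a check (a Windows C/C++ console program), assuming the Mult export from the code shown in the next post; the DLL file name here is hypothetical:

    #include <windows.h>
    #include <stdio.h>

    typedef int (__stdcall *MultFn)(int);

    int main(void)
    {
        /* Hypothetical name; use the actual output of the DPC++ build. */
        HMODULE dll = LoadLibraryA("DPCPP_DLL.dll");
        if (dll == NULL) {
            /* Fails here if a dependency (e.g. sycl.dll) cannot be found. */
            printf("LoadLibrary failed, error %lu\n", GetLastError());
            return 1;
        }

        MultFn Mult = (MultFn)GetProcAddress(dll, "Mult");
        if (Mult == NULL) {
            printf("GetProcAddress failed, error %lu\n", GetLastError());
            FreeLibrary(dll);
            return 1;
        }

        printf("Mult(7) = %d\n", Mult(7));  /* expect 21 */
        FreeLibrary(dll);
        return 0;
    }

    If this little program fails on LoadLibrary with the DPC++ build but succeeds with the classic build, the problem is dependency resolution rather than LabVIEW itself.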

  6. 13 hours ago, dadreamer said:

    Maybe the reason is that LabVIEW cannot find some DPC++ Runtime library during the DLL load process (sycl.dll for instance or another one). Could you check with Dependency Walker, which libraries are in dependencies of your DLL?

     

    It seems that you are right (which reassures me, because your reasoning is logical).

    Here is the screenshot from Dependency Walker.

    Are there routines to integrate into my code in order to remove these errors?

    Error: At least one required implicit or forwarded dependency was not found.
    Warning: At least one delay-load dependency module was not found.

     

    dependency walker.PNG

     

    As a reminder

     

    Header.h

    ///////////////////////////////////////////////////////////////////////////////////////////////////////////

    #pragma once

    #ifdef DPCPP_DLL_EXPORTS
    #define DPCPP_DLL_API __declspec(dllexport)
    #else
    #define DPCPP_DLL_API __declspec(dllimport)
    #endif

    // DPCPP_DLL_API already expands to dllexport/dllimport, so it is not
    // combined with a second __declspec here.
    extern "C" DPCPP_DLL_API int __stdcall Mult(int a);

    ///////////////////////////////////////////////////////////////////////////////////////////////////////////

    dllmain.cpp

    ///////////////////////////////////////////////////////////////////////////////////////////////////////////

    #include "pch.h"
    #include "Header.h"

    // DPCPP_DLL_EXPORTS is defined by the DLL project, so DPCPP_DLL_API
    // expands to dllexport here; the signature matches Header.h.
    extern "C" DPCPP_DLL_API int __stdcall Mult(int a)
    {
         return a * 3;
    }

    BOOL APIENTRY DllMain( HMODULE hModule,
                           DWORD  ul_reason_for_call,
                           LPVOID lpReserved
                         )
    {
        switch (ul_reason_for_call)
        {
        case DLL_PROCESS_ATTACH:
        case DLL_THREAD_ATTACH:
        case DLL_THREAD_DETACH:
        case DLL_PROCESS_DETACH:
            break;
        }
        return TRUE;
    }

    ///////////////////////////////////////////////////////////////////////////////////////////////////////////

     

     

    dllmain.cpp Header.h

  7.

    8 minutes ago, dadreamer said:

    Just switch to the Release option on the MSVS toolbar. As to the differences, you may read about them e.g. here. To add to that, debug builds depend on the debug version of the Visual Studio Runtime libraries, whereas release builds depend on the common MSVCRT DLLs, which are very likely already installed in the system. Hence, if you compile a debug app or library and deploy it to machines without Visual Studio installed, it will ask for the debug DLLs or even the whole Visual Studio to be installed. A release app/DLL, on the other hand, usually requires only the Visual C++ Redistributable Runtime.

    Thanks a lot! Unfortunately, it doesn't solve the problem.

    This issue is very strange!

  8. 3 minutes ago, dadreamer said:

    For MSVS - on the left of the function name, for LabVIEW - in the CLFN settings.

    extern "C" __declspec(dllexport) int __stdcall Mult(int a)
    {
      // will be exported with stdcall convention
    }

    Thank you for the information, but unfortunately it does not solve my problem.

     

    Regarding the release build, I haven't tried it because I didn't understand the difference and how to make it work. Could you be more explicit?
    "this Shareable board exclusively owned." is very strange

  9. 7 minutes ago, dadreamer said:

    Did you try the Release build also? I noticed that you're exporting the Mult function without explicitly specifying the calling convention. By default Visual Studio exports with the cdecl convention, but in LabVIEW you set the stdcall convention. Likely it's not the reason for the errors, though.

     😱 😱 😱 😱 😱

    Where can I specify the calling convention?!! (Really sorry for the question.)

     

  10. Hi everybody,

     

    Intel recently released a DPC++ (Data Parallel C++) compiler that optimizes speed for Intel CPUs and GPUs.

    My problem is that when I compile the functions with the normal Intel 2022 compiler (or the classic Visual Studio compiler) there is no problem, but when I use the new Intel DPC++ compiler, LabVIEW returns an error.

    Both Intel compilers work perfectly in C and C++ under Visual Studio. As an example, I made a simple function that just multiplies an int32 by 3 and returns the result.

     

    The DPC++ compiler only exists for the x64 architecture, and I use LabVIEW 2020.

    I made a video to show the problem.

     

     

     

    File here: https://we.tl/t-9Iwkf1IGvr (you can compile it yourself and see the problem). I added all the installers (Visual Studio 2022 + Intel compiler + Intel DPC++ compiler) in the "install" directory.

    (In Visual Studio, press Alt-F7 to go directly to the project properties and change the compiler; F5 to compile.)

     

    Can someone tell me what's wrong?

    How can I make my DLL work with the DPC++ compiler?

     

  11. Thanks, Rolf, for the explanation (I still need to digest it all).

    14 hours ago, dadreamer said:

    Still I don't get the grand design of these experiments. Maybe I should wait a little. 😀

    The best way to acquire experience is to experiment! Reading some of you, you would think we were operating a nuclear plant 😆 --> worst case, LabVIEW crashes (we'll survive, really!!)

    I am working on a project where I need fast array operations (read, calculate, write) on the CPU:

    Good news - the arrays are fixed size (no pointer reallocation and no resizing).
    Bad news - the arrays can be 1D, 2D, 3D, or 4D. (The access times with the native LabVIEW function palette are not satisfactory for our application --> we need to find a better solution.)

    By analogy, we assume that access to an array is limited on a PC as it is on an FPGA (there, the physical limit on read/write access to array data is two read and write ports per clock cycle, whatever the size of the array).

    There is also the O(N) rule, which says that the access time (read/write) to array data is proportional to its size N --> I may be wrong here.

    In any case, to improve the access time (read/write) of an array, a simple solution is to organize our data ourselves: the array is split into several arrays (pointers) to multiply the access speed --> O(N) becomes, in theory, O(N/n), and the number of ports is multiplied by n (access time).

    We navigate in this "array" by addressing the right part (the right pointer).

    Some will ask: why not just split your array in LabVIEW and be done with it? --> Simply because navigating with pointers avoids unnecessary data copies at every level, and those copies cost us processing time.
    We tested it, and we saw a noticeable difference!

    In theory, doing it this way is much more complex to manage, but it has the advantage of being faster for reading/writing data, which is in fact the main problem. (A sketch of the idea follows below.)
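
    To make the idea concrete, here is a minimal C/C++ sketch of such a split 1D array (the block count and all names are illustrative, not our actual implementation; the total length is assumed divisible by the block count):

    #include <stdlib.h>

    /* One logical fixed-size array stored as several blocks; an element is
       reached by first selecting the right block (the right pointer), then
       indexing inside it. */
    #define NUM_BLOCKS 4

    typedef struct {
        unsigned int *block[NUM_BLOCKS]; /* one pointer per sub-array */
        size_t        blockLen;          /* elements per block, fixed */
    } SplitArray;

    static SplitArray split_array_create(size_t totalLen)
    {
        SplitArray a;
        a.blockLen = totalLen / NUM_BLOCKS;
        for (int b = 0; b < NUM_BLOCKS; b++)
            a.block[b] = (unsigned int *)malloc(a.blockLen * sizeof(unsigned int));
        return a;
    }

    static unsigned int split_array_get(const SplitArray *a, size_t i)
    {
        return a->block[i / a->blockLen][i % a->blockLen];
    }

    static void split_array_set(SplitArray *a, size_t i, unsigned int v)
    {
        a->block[i / a->blockLen][i % a->blockLen] = v;
    }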

     

    2D array.png

     

    Now, why am I having fun with C/C++?
    Simply in case we can't go fast enough on some operations; in that case we transfer the data via pointers (as I said, a well-managed pointer is the best solution: no copy) and use C/C++ libraries like "boost" that are optimized for certain operations.

    MoveBlock is a very interesting function!
    So the next step is to code and test 3D/4D arrays, and to be able, with only the primary pointer address, to navigate very quickly inside arrays (recode Replace, recode Index, code construction of the final array).

     

    I found some old documentation and topics about memory management, and they helped me a lot. Thank you again, Rolf; I have often seen your posts help a great deal.

  12. Rolf, if I understood you correctly, if I do this:

    DLL_EXPORT unsigned int* Tab1D_int_Ptr(unsigned int* ptr)
    {
        return ptr;  /* hand the LabVIEW data pointer straight back */
    }

    Since the data comes from LabVIEW, does it mean the memory address could be released at any time by LabVIEW? (That's logical.)

    --> Method 2 is a solution in that case (a pointer created in LabVIEW with DSNewPtr + MoveBlock), as sketched below.
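
    A minimal sketch of method 2 on the C/C++ side, assuming the DLL is built against extcode.h from LabVIEW's cintools directory (the function names are illustrative):

    #include "extcode.h"  /* LabVIEW memory manager: DSNewPtr, MoveBlock, DSDisposePtr */

    #define DLL_EXPORT __declspec(dllexport)

    /* Copy LabVIEW array data into a block LabVIEW will not release behind
       our back; it stays valid until FreeStableCopy is called. */
    extern "C" DLL_EXPORT uInt32 *MakeStableCopy(const uInt32 *labviewData, int32 count)
    {
        uInt32 *p = (uInt32 *)DSNewPtr(count * sizeof(uInt32));
        if (p != NULL)
            MoveBlock((ConstUPtr)labviewData, (UPtr)p, count * sizeof(uInt32));
        return p;
    }

    extern "C" DLL_EXPORT void FreeStableCopy(uInt32 *p)
    {
        DSDisposePtr((UPtr)p);
    }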

     

    On 5/4/2022 at 1:15 PM, cordm said:

     

    If you also want to resize LabVIEW data structures, there are memory manager functions to do that. Pass the array by handle and use DSSetHandleSize or NumericArrayResize.

    Examples for interfacing with DLLs are here: examples\Connectivity\Libraries and Executables\External Code (DLL) Execution.vi

     


    I have another question: for an array, what is the difference between passing by pointer and passing by handle?

    I mean, with the handle method a struct implicitly gives the C/C++ side the length of the array, but is there another difference? (Ugly structure syntax 🥵)

    (Many thanks for the example, cordm! 👍)
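
    For reference, a sketch of what that "ugly struct" looks like for a 1D U32 array passed by handle (types from LabVIEW's extcode.h; the function itself is illustrative):

    #include "extcode.h"

    #define DLL_EXPORT __declspec(dllexport)

    /* A LabVIEW 1D array handle: a pointer to a pointer to a block that
       starts with the element count, immediately followed by the data. */
    typedef struct {
        int32  dimSize;   /* number of elements, stored with the data */
        uInt32 elt[1];    /* first element; the rest follow contiguously */
    } U32Array;
    typedef U32Array **U32ArrayHdl;

    extern "C" DLL_EXPORT uInt32 SumByHandle(U32ArrayHdl arr)
    {
        uInt32 sum = 0;
        for (int32 i = 0; i < (*arr)->dimSize; i++)
            sum += (*arr)->elt[i];
        /* With a bare pointer, the length must be passed as a separate
           parameter; with a handle it travels inside the data itself. */
        return sum;
    }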

     

    The image is a VI snippet.

     

    On 5/4/2022 at 2:04 PM, ShaunR said:

    Be afraid; be very afraid :D 

    ShaunR, I'm not far from doing what I want 😉

     

    Pointeur.dll

    memory management.png

  13. Hello ShaunR,

    First of all, thank you for taking the time to answer me.
    In view of your answer, I will recontextualize the question.

    First of all, I agree that you have to initialize and delete a pointer correctly. This is an example whose aim is to understand memory management under LabVIEW.

    I want to understand how it works. I don't agree with your answer forbidding pointer manipulation. Once the subject is mastered, there is no reason to be afraid of it.

    What I want to do is understand how LabVIEW stores a variable in memory when it is declared.
    Is it strictly like in C/C++? Let's take an example with an array of U8.

    Because in this case, by manipulating the pointers properly, it becomes interesting to declare an array variable in LabVIEW, transmit its address to a DLL (first difficulty), manipulate it as needed in C/C++, and then return to LabVIEW to continue the flow.

    Why do I want to do this?
    Because it seems (I say "seems" because it is probably not necessarily so) that LabVIEW operations are slow, too slow for my application!

    As you know, we are working on the development of a deep learning library, which is greedy in computation, so it is necessary to accelerate it with multithreaded C/C++ libraries (unless an equivalent exists in LabVIEW, but I doubt that for the moment).

    Just to give you a comparison: if we settle for using LabVIEW normally, we are 10 times as slow as Python!!

    Is it possible to pipeline a loop in LabVIEW? Is it possible to merge nested loops in LabVIEW?

    Finally, about the data transfer: I understand perfectly that in terms of safety, copying data into the DLL to use it and then restoring it to LabVIEW is tempting, but the worry is the data transfer delays. That's what we want to avoid!

    I think it's silly to copy data that already exists in memory; why not use it directly (provided the subject is mastered)? The copies and transfers make us lose time.

    Can you please give me some answers ?

    Thank you very much 

     

  14. Hi everybody,

    In our quest to optimize our code, we are trying to interface variables declared in LabVIEW with C/C++ processing (a DLL).

    Copy_adr_BD.png

     

    1 - In my example, we had fun declaring a U32 variable in LabVIEW, then we created a pointer in C to assign it the value we wanted (a copy), then we restored the value to LabVIEW.
    In this case everything works correctly.

    Here is the code in C:


    DLL_EXPORT unsigned int *create_copy_adress_Uint(unsigned int val)
    {
        /* allocate one unsigned int and copy the LabVIEW value into it */
        unsigned int *adr = (unsigned int *)malloc(sizeof(unsigned int));
        *adr = val;
        return adr;  /* the caller must eventually free this block */
    }


    Hence my question: am I racking my brain unnecessarily? Does my function set already exist in the LabVIEW DLL? (I have a feeling that one of you will tell me...)

    Copy_adr_fp.png

     

     

    Get_adress.png

    2 - In our second experiment (more interesting), we this time assign the address of the U32 variable declared in LabVIEW to our pointer; the idea is to act directly from C on the variable declared in LabVIEW.
    We read this address, then we try to manipulate the value of this variable via the pointer in C, and it does not work!

    Why? Or did I make a mistake in my reasoning?

     

    Get_adress_fp.png

     

     

     

    This experiment aims to master the memory management of data declared in LabVIEW at the C level. The idea would then be to do the same thing but with U32 or SGL arrays.

    3 - When I declare a variable in LabVIEW, how is it managed in memory? Is it done like in C/C++?

    4 - Last question: the MoveBlock function gives me the value at a pointer (read); which function allows me to write to a pointed cell?

     


     

    DLL_EXPORT unsigned int *get_adress_Uint(unsigned int val)
    {
        unsigned int *adr;
        adr = (unsigned int *)malloc(10);
        adr = &val;   /* note: val is the DLL's local copy of the LabVIEW
                         value, so this returns the address of that stack
                         copy (and the malloc'd block above is leaked) */
        return adr;
    }

    DLL_EXPORT unsigned int adress_to_int(unsigned int *adr)
    {
        return *adr;  /* read the pointed cell */
    }

    DLL_EXPORT void set_int_to_adress(unsigned int *adr, unsigned int val)
    {
        *adr = val;   /* write the pointed cell */
    }
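
    On question 4: MoveBlock is a raw byte copy in both directions, so the same function writes when the pointer is the destination and reads when it is the source. A minimal sketch, assuming extcode.h from LabVIEW's cintools (names illustrative):

    #include "extcode.h"

    /* Write a U32 through a pointer: the source is our value, the
       destination is the pointed cell. Reading is the same call with the
       arguments swapped. */
    void WriteU32(uInt32 *target, uInt32 value)
    {
        MoveBlock((ConstUPtr)&value, (UPtr)target, sizeof(uInt32));
    }

    uInt32 ReadU32(const uInt32 *source)
    {
        uInt32 value;
        MoveBlock((ConstUPtr)source, (UPtr)&value, sizeof(uInt32));
        return value;
    }

    The same holds when calling MoveBlock from a Call Library Function Node: wiring the pointer as the destination parameter performs the write.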

     

     

    I put the source code in a zip file.

     

     

     

    DLL pointeur.zip

  15. 11 minutes ago, dadreamer said:

    They are Code Interface Nodes. This is an obsolete and no longer supported technology that was superseded by Call Library Function Nodes. If you want to know more about CINs, take a look at the Code Interface Reference Manual.

    You still can download the C Code Generator package and install it into LabVIEW (2017 is the latest version), but it's not actively maintained these days, and I even suppose that it was dropped from LV 2020. If you want to dig deeper into this, that thread may be useful for you.

    Thank you very much!! I will have a look; it will be very useful for optimizing our execution code!

  16. On 4/15/2022 at 9:34 PM, dadreamer said:

    It was me. And not partially. 64-bit CINs do work absolutely the same way as 32-bit CINs. I successfully managed to call all the entry points and later adapted three examples from Code Interface Reference Manual to 64-bit CINs. Moreover, I managed to make LabVIEW load external subroutines, that was impossible and unsupported in versions 8.0 to 2016. But in fact all this extra knowledge gave me nothing for my real work but wasted time and efforts. I stopped experimenting with CINs in around 2015 or so and never really came back to this legacy tech.

     

    Sorry guys if I sound like a newbie, but what is a "CIN"?

    Another question: I remember seeing, around 2011, that LabVIEW had a C code generator. Do you know why this option is no longer available?

     

  17. OK, thanks guys for all of this feedback!

    It's for our HAIBAL project: we will soon start optimizing our code, and I'm exploring different possibilities.

    We continue the work of Hugo.

    Always on our famous stride! And our dream now is to finish on the Xilinx FPGA platform.

    I want to prove that we can also be efficient in computation with LabVIEW. (Maybe we will have to precompile a large part of our code as DLLs to make it more efficient.)

     

    image.png

  18. Hello LabVIEW community,

    Is there documentation about the functions exported by the LabVIEW DLL?

    Our team would like a global view in order to explore what these functions make possible. (Maybe another topic already covered this.)

    Thanks for your help

    LabVIEW1.png

    LabVIEW2.png

  19. The HAIBAL toolkit will let the user build his own architecture / training / prediction natively in LabVIEW.
    Of course, we will natively provide numerous examples like YOLO, MNIST, VGG... (which the user can directly use and modify).

    As our toolkit is new, we chose to also be fully compatible with Keras. This means that if you already have a model trained in Keras, it will be possible to import it into HAIBAL. This also opens our library to the thousands of models available on the internet. (Every import is translated into native HAIBAL LabVIEW code, editable by users.)

    LV_Logo_PowerdBy_centered-wide.png


    In this case, you will have two choices:
    1 - use it in LabVIEW (predict / train);

    2 - generate the entire equivalent native architecture in HAIBAL (as you can see in the video) in order to modify it as you wish.

    HAIBAL is more than 3000 VIs; it represents a huge amount of work, and we are not finished yet. We hope to release the first version this summer (with CUDA) and hope to add NI FPGA optimization to speed up inference. (OpenCL and compatibility with all Xilinx FPGAs will also come during 2022/2023.)

    We are currently building our website and our YouTube channel.


    The team will offer tutorials (YouTube/GitHub) and news (website) to give users visibility.

     

    In this video, we import a Keras VGG-16 model saved in HDF5 format into the HAIBAL LabVIEW deep learning library. Then our scripting can generate the graph, allowing the user to modify any architecture for his purposes before running it.

  20. Thank you very much for your encouragement. 😀
    Yes, we can confirm that it took a lot of work, and your encouragement pushes us to do more!

    We also thank you for your suggestions for improvements (we are interested!! The objective of HAIBAL is to be a user-friendly library).

    I (Youssef Menjour) have always liked LabVIEW and artificial intelligence, and it was frustrating not to have an efficient tool at our disposal.

    We will start sharing more and more examples in the next few days. 🧑‍🎓
    We will also soon offer a free library to pilot a drone that is easily affordable on Amazon, because HAIBAL will ship with an example of an AI-assisted autopilot for this drone (and a complete tutorial on YouTube).

    We are also thinking about doing the same with a "mini cheetah" type robot.

    In short, things will move in the coming weeks; we still have a lot of work, and once again, your encouragement makes us really happy.

    LabVIEW without AI is a thing of the past. 💪

     

     

     

    This example is a template state machine using the HAIBAL library.

    It shows a signal (here a sinc), and during its training the neural network has to learn to predict this signal (here we chose 50 neurons per layer, 10 layers; the chosen layer type is Dense).

    This template will be offered as a basic example of how to initialize, train, and use a neural network model. This kind of "visualization example" is inspired by https://playground.tensorflow.org/ to help those who want to start learning deep learning.

     

  21. Dear Community,

    TDF is proud to announce the upcoming release of the HAIBAL library for Deep Learning in LabVIEW.

    The HAIBAL project is structured in the same way as Keras.

    The project consists of more than 3000 VIs, all coded in native LabVIEW, including: 😱😱😱

    • 16 activations (ELU, Exponential, GELU, HardSigmoid, LeakyReLU, Linear, PReLU, ReLU, SELU, Sigmoid, SoftMax, SoftPlus, SoftSign, Swish, TanH, ThresholdedReLU), the nonlinear mathematical functions generally placed after each layer that has weights.
    • 84 functional layers (Dense, Conv, MaxPool, RNN, Dropout, etc.).
    • 14 loss functions (BinaryCrossentropy, BinaryCrossentropyWithLogits, Crossentropy, CrossentropyWithLogits, Hinge, Huber, KLDivergence, LogCosH, MeanAbsoluteError, MeanAbsolutePercentage, MeanSquare, MeanSquareLog, Poisson, SquaredHinge), the functions evaluating the prediction against the target.
    • 15 initialization functions (Constant, GlorotNormal, GlorotUniform, HeNormal, HeUniform, Identity, LeCunNormal, LeCunUniform, Ones, Orthogonal, RandomNormal, RandomUniform, TruncatedNormal, VarianceScaling, Zeros), the functions initializing the weights.
    • 7 optimizers (Adagrad, Adam, Inertia, Nadam, Nesterov, RMSProp, SGD), the functions updating the weights.

     

    Currently, we are working on full Keras compatibility through the HDF5 file format, and we will soon start the same job for PyTorch. (We are able to load models from it, and we will be able to save models to it in the future; this part is important to us.)

    Obviously, CUDA already works if you use an Nvidia board, and NI FPGA boards will work too (not done yet).

    We are also working on full integration with the Xilinx Alveo systems for acceleration.

    Users will be able to build any model they want; the only limitation will be their hardware. (We will offer the same freedom as Keras or PyTorch.) In the future, our company could offer hardware (a Linux server with a Xilinx Alveo card, for example --> https://www.xilinx.com/products/boards-and-kits/alveo.html, all fully HAIBAL compatible!!!)

     

    About the project communication:

    The website will be completely redone, a YouTube channel will be set up with many tutorials, and a set of well-known examples will be offered within the library (YOLO, MNIST, etc.).

    For now, we haven't defined a release date, but we are thinking of next July (it's not official; we are doing our best to finish the product, but as we are a small, passionate team (there are 3 of us working on it), we are doing our best to release it soon).

     

    This work is titanic, and believe me, it makes us happy that you encourage us in it. (It boosts us.) In short, we are doing our best to release this library as soon as possible.

    Still a little patience…

     

    YouTube video:

    This example is a template state machine using the HAIBAL library.

    It shows a signal (here a cosine), and during its training the neural network has to learn to predict this signal (here we chose 40 neurons per layer, 5 layers; the chosen layer type is Dense).

    This template will be offered as a basic example of how to initialize, train, and use a neural network model. This kind of "visualization example" is inspired by https://playground.tensorflow.org/ to help those who want to start learning deep learning.

     

     

     

    4.png

     

     

    1.png
