captainjin Posted September 10, 2008

The bitshift function performs a bitwise shift on its input elements. When the input is 0, the result is always 2^n, regardless of the shift size n. For example:

bitshift(0, 4) = 16 (10000)
bitshift(0, 8) = 256 (100000000)

This is the same behavior as for an input of 1:

bitshift(1, 4) = 16 (10000)
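For comparison, here is a minimal C sketch of how a conventional bitwise left shift behaves; one would expect bitshift to match this for inputs of 0 and 1 (the sketch is only illustrative and is not the MathScript implementation):

#include <stdio.h>

int main(void)
{
    /* Shifting 0 left by any number of bits stays 0. */
    printf("0 << 4 = %d\n", 0 << 4);   /* prints 0, not 16 */
    printf("0 << 8 = %d\n", 0 << 8);   /* prints 0, not 256 */

    /* Shifting 1 left by n bits gives 2^n. */
    printf("1 << 4 = %d\n", 1 << 4);   /* prints 16 (binary 10000) */
    return 0;
}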
JesseA Posted September 10, 2008

Hi captainjin,

According to the MathScript help, the first input to bitshift must be a scalar, vector, or matrix of positive integers, so the results when you pass in zero are technically undefined. But this seems like a silly limitation, so we've reported it to LabVIEW R&D as a bug (CAR# 53118).

Thanks,
JesseA
LabVIEW R&D