X-Plane, UDP, and Visual Basic, for X-Plane Version 9

 

email: webmaster@jefflewis.net

Introduction
If you don't know what any part of this page's title means, you might not be in the right place. But don't worry, I'll explain it all. X-Plane is a popular computer flight simulator. It has a very accurate flight model, making it powerful not only for entertainment, but also as an engineering tool. It comes packaged with additional programs that allow you to design your own aircraft and fly it in X-Plane. Besides its accurate flight model, X-Plane has another feature that makes it very powerful: it outputs flight data over a network, and allows certain parameters, such as control positions, to be sent back to it (and if you're using version 8.6 or newer, you only need one computer to do this). The protocol that X-Plane uses to send and receive the data is UDP, hence the UDP in the title above. As for the Visual Basic, to make use of the data you have to have a program that does something with it, and Visual Basic is the programming language I use. If you still don't understand any of that, go check out the X-Plane website. From there, you can download a demo version of the simulator and see what makes it so great.

This tutorial has been updated using X-Plane 9.30 beta 11. Previous versions were made using X-Plane 6.51 and X-Plane 8. The format of the X-Plane UDP packets was changed somewhere in between there, so I don't know if either will work for X-Plane 7. If you need one of the previous tutorials, please refer to the X-Plane UDP Index to check for that version.

There's already a pretty good site for info on UDP and X-Plane, although it's a little outdated: http://www.x-plane.info. Also, decent documentation for UDP now comes included with X-Plane (it didn't when I wrote the first tutorial). I'm not going to try to repeat everything from those two sources. I'm writing this page for two reasons: I didn't see anything on those sources that dealt specifically with Visual Basic (it was mostly C), and they assumed you already had some network programming experience. When I first started playing around with this stuff, I'd never programmed anything that dealt with a network, so I had to figure it all out by scouring web sites for the relevant information. Hopefully, by putting all this info in one spot, someone else in the same situation I was in will read this and save some time. By the way, I'm assuming that you at least know how to program in Visual Basic.

The easiest way to find a listing of all the UDP data channels is just to look at the Data Input & Output screen in X-Plane. To figure out what gets sent in each of those channels, just temporarily select them for output to the Cockpit During Flight.

One more thing before we get started. You can download some source code to see all of this in context in a program while you're reading. You can also use this source code as your foundation for writing your own applications.
Download source code

 

WinSock
To start off, you'll need to add two WinSock components to your program. WinSock stands for Windows Sockets. It is the interface between programs and the network, and has built-in support in Visual Basic. First, go to the Project menu and select Components. In that window, scroll down until you get to Microsoft Winsock Control 6.0, and click the check box. Then click OK. A little icon will appear on your tool bar that looks like two computers connected together. Click it, and then add two of them anywhere on your form.

One Winsock control will be used for receiving data from X-Plane. The other will be used for transmitting data to X-Plane. This is actually a change from older X-Plane versions. Previously, you could get away with one winsock control, but X-Plane now communicates over the network slightly differently.

There are four parameters that we have to set on each Winsock control: RemoteHost, RemotePort, Bind (the local port), and, most importantly, the Protocol. First, set the protocol on each Winsock component to 1 (UDP). The other protocol (0) is TCP, which we won't use.

As for ports: if an IP address is like a street address, then a port is like an apartment number. When a program binds a port, it can receive information on that port and transmit data from that port, and no other program on that computer can use it. X-Plane actually binds two ports, 49000 and 49001. It uses 49000 only to receive data, and 49001 only to send data (this is the difference from previous versions, which used 49000 to both transmit and receive). These are hard coded into X-Plane, and there's no option to change them.

One aspect of Winsock in VB (apparently it's a feature and not a bug) is that once a Winsock control processes an incoming data packet from a sender's specific port and IP, that control will automatically configure itself to send data back to that port and IP, even if it was originally set up to send to a different one. So, since X-Plane will be sending from port 49001, the Winsock control that receives that data will try to respond to port 49001. The problem is, X-Plane isn't listening on that port, which is why we need a separate Winsock control to transmit data to X-Plane. So, on the Winsock control that's going to receive data, set the remote port to 49001, and on the one that's going to send data, set the remote port to 49000.

Now, for binding the local port, there are some options. In X-Plane, when setting the IP address and port of the machine you want to send data to, X-Plane defaults to port 49000 (it's expecting to send the data to another copy of X-Plane). However, for our application, there's nothing saying that it has to be 49000. In fact, if you're going to run the VB program on the same computer as X-Plane, you have to pick a different port since X-Plane will have already bound 49000. I use 49002 for the receiver Winsock (that's the port that has to be set in X-Plane's data output options) and 49003 for the transmitter Winsock, but you could really set it to whatever you wanted. If you're going to run your VB program on a separate machine from X-Plane, you might consider setting your receiver Winsock to bind 49000, to give you one less option to set in X-Plane.

The last thing to set up on the Winsock components is the RemoteHost IP address, and relatedly, the IP address of your data receiver in the X-Plane options. If you're running your VB program and X-Plane on separate computers, just set the IP addresses accordingly. If you're running them both on the same computer, you can't simply type in your own IP address into the X-Plane options (for example, 192.168.0.2). X-Plane won't send the data that way. You have to set the options in X-Plane to send the data to 127.0.0.1 (the standard localhost IP address). The added advantage is that 127.0.0.1 doesn't actually use the network. Visual Basic can handle using either your actual IP address or the localhost IP address, but it's probably best to use the localhost to avoid all of the network hardware.
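Since the port arrangement above doesn't depend on VB, here's a minimal sketch of the same two-socket setup using Python's standard socket module, for readers who want to experiment outside Visual Basic. The port numbers follow the text; the function name make_sockets is just for illustration.

```python
import socket

XPLANE_ADDR = ("127.0.0.1", 49000)   # X-Plane receives on 49000 (hard coded)
RX_PORT = 49002                      # our receiver port (set this in X-Plane's output options)
TX_PORT = 49003                      # our transmitter port (any free port works)

def make_sockets(rx_port=RX_PORT, tx_port=TX_PORT):
    """One socket to receive DATA packets from X-Plane, one to send to it."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", rx_port))           # X-Plane sends its data output here
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.bind(("", tx_port))           # DSEL/USEL/DATA packets go out from here
    return rx, tx
```

A plain socket has no equivalent of the VB control's auto-reconfiguration quirk, but keeping separate receive and transmit sockets mirrors the two-control setup described above.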

For our purposes, we need to be concerned with one event and two methods associated with Winsock. The event is _DataArrival. This occurs whenever a UDP packet is received at the port specified. Since I named my receiver Winsock control WinsockUDP_Rx, the subroutine for this event is:

     Sub WinsockUDP_Rx_DataArrival(ByVal bytesTotal As Long)

     End Sub

Now, whenever a packet of data is received by our program, this subroutine will be run. The methods that we need to be concerned with are .GetData and .SendData. Their purposes are pretty much self explanatory, and they're very easy to use. When you receive data, you need to store it to a variable, so use code that looks something like:

     WinsockUDP_Rx.GetData VariableName

If you want to send a variable, use code that looks something like this (keeping in mind that I named my transmitter Winsock component WinsockUDP_Tx):

     WinsockUDP_Tx.SendData VariableName

Just a small note before I get into the explanation of UDP packets: I declare the variable that I'm going to use to store the received data as a byte array, and I do the same for the variable that holds the data I'm going to send. I do this because UDP packets can only carry bytes. But I'm starting to get ahead of myself. Let's start looking at UDP packets.

 

UDP
This is the section of the tutorial that I think will be the most help, because it's the section that I had the most trouble with myself. UDP stands for User Datagram Protocol. It is a method of sending information over a network. We don't really need to be concerned with all the details of how it works, but there are a few things that we need to know. First, unlike other protocols, like TCP, UDP does not do any error checking. If we send a packet, and the other computer gets the wrong data, or doesn't get any data at all, we have no way of knowing, unless we program our own error checking. But since we can't really change the code of X-Plane, we're kind of stuck on that. This really shouldn't be much of a problem, especially if you're going to be transmitting over a local network, but it could be the cause of a hard to track down problem.

The next important thing to know is that UDP packets are composed entirely of bytes, as are all packets sent over networks, and even everything stored on your hard drive. It has to be this way- computers only work with ones and zeros. To get decimals, you have to do a bit of math on the bytes that you've stored. To get letters and other symbols, you have to know the ASCII code for that symbol, to translate the byte into the symbol. X-Plane uses what are known as single precision floating point variables for just about everything sent over the network. This means that the number can be stored using four bytes. A double precision floating point variable would require eight bytes. So let's take a look at how to convert those four bytes into a number.

Single Precision Floating Point Numbers and Bytes
Let me say a couple things before I get into the details of floating point numbers on computers, which will hopefully make it seem a little simpler. First, a floating point number on a computer is basically just the binary equivalent of scientific notation in decimal numbers. This lets the computer maintain significant digits, while at the same time being able to represent really big and really small numbers. So, it's calculated as:

     Value = Significand * 2^Exponent

Second, when I first figured out how to do this and wrote my algorithms for these conversions, I didn't know enough about certain aspects of programming to go about it in the most efficient manner. In fact, I still don't, but someone who does sent me a suggestion for a more efficient algorithm, which I've included at the very end of this page. I'm leaving my code in, because even if you never use it, it still helps explain the theory of how floating point numbers work. Really, I guess you don't need to know the theory, but it's still nice. The reason for putting the more efficient algorithm at the end of the page is that I've included the entire algorithm, which is rather long, and I thought putting it at the end would leave the rest of this tutorial more readable. (It should be noted that Passel's algorithm is much more efficient: when I ran my code and his code in a loop that repeated 10,000 times, which still didn't take long to run, his algorithm was about 30 times faster than mine.) Now, on to the details.

A single precision floating point number is stored as 4 bytes. Let's just use this as an example:

     66 246 64 0

At this point, something important to bring up is whether the bytes are being stored in big endian or little endian format. Basically, that's the convention used for the order of listing bytes. As an example from everyday life, when we write 524, we assume it to mean (5 x 10^2) + (2 x 10^1) + (4 x 10^0). That's known as big endian, because the largest values are listed first. It's the convention that we use, but if the group of people that invented our number system had done things a little differently, that same value could just as easily have been written as 425, to mean (4 x 10^0) + (2 x 10^1) + (5 x 10^2), which would be little endian. For our case of floating point values, you need to know the proper order to analyze those bytes. Basically, Macs use big endian, where you can just use the bytes the way they are. PCs use little endian, and you need to look at the bytes in reverse order. (In X-Plane 8, X-Plane's developer standardized how UDP packets were sent so that they were all in big endian order, but in 9 it's back to being platform specific.)

If you received a data packet from a PC, your bytes would be in the following order:

     0 64 246 66

The first step in this case would be to reverse the order of the bytes; the rest of the calculation is then identical:

     66 246 64 0

So, starting with four bytes in the proper order, we first need to convert the variables to binary. Always use an eight digit binary number. Use leading zeros if you have to. If you're unsure how to do this, here's the code I used:

     RunningTotal = byte1
     For ctr = 1 To 8
       If RunningTotal >= 2 ^ (8 - ctr) Then
         NumberString1 = NumberString1 + "1"
         RunningTotal = RunningTotal - 2 ^ (8 - ctr)
       Else
         NumberString1 = NumberString1 + "0"
       End If
     Next ctr
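For comparison, the same byte-to-binary conversion is nearly a one-liner in Python (byte_to_bits is just a name I made up for illustration):

```python
def byte_to_bits(b):
    # eight-digit binary string with leading zeros, e.g. 66 -> "01000010"
    return format(b, "08b")

assert byte_to_bits(66) == "01000010"
assert byte_to_bits(246) == "11110110"
```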

So, after converting each of the four bytes to binary, we get:

     01000010 11110110 01000000 00000000

From this list of digits, we need to get three numbers. So first, combine all the digits into one long list.

     01000010111101100100000000000000

Now we need to redo where the breaks are in the same way that the computer will. Do it like so- 1 digit, 8 digits, 23 digits. The first bit is the sign bit, the next 8 bits are the biased exponent, and the remaining 23 bits are the mantissa.

     0 10000101 11101100100000000000000

The first digit is the sign digit. It tells us whether the number is positive or negative (0 for positive, 1 for negative). Since it is a zero here, our number is positive.

The next 8 bits are the biased exponent, biased because it's 127 more than the actual exponent. This is done because there's no way to represent a negative number with just ones and zeroes. So, convert the binary to decimal by multiplying by powers of two. Start with the right-most bit, and work your way to the left.

     Biased Exponent = (right-most bit * 2^0) + (next bit * 2^1) + ... + (left-most bit * 2^7)

Once you've converted the binary into decimal, simply subtract 127 to get the actual exponent.

     Biased Exponent = 133
     Exponent = Biased Exponent - 127 = 6

Now for those last 23 digits. These are called the mantissa, or the fractional part of the significand. Basically, it's still a number in binary, only these are the digits that come after the decimal point (binary point, really). For instance, if in our normal decimal base you had 3.1415926, the mantissa would be the 1415926. So, the conversion of the binary mantissa back to decimal is the same, only now you're using negative powers of two.

When this standard was created, it was decided that since the significand of a normalized number is always at least 1 (and less than 2), there was no need to waste a bit encoding that leading 1, so only the mantissa is encoded. So, when you calculate the value of the significand, you assume the 1 is already there. To convert the above series of digits into the number that we're going to use, do the following:

     Significand = Implied 1 + Mantissa
     Significand = Implied 1 + (first bit * 2^-1) + (2nd bit * 2^-2) + (3rd bit * 2^-3) + ... + (23rd bit * 2^-23)

Here it is for our specific example, where the numbers in bold are the digits in the binary sequence:

     Significand = 1 + (1 * 2^-1) + (1 * 2^-2) + (1 * 2^-3) + (0 * 2^-4) + ... + (0 * 2^-23)
     Significand = 1.923828125

Now that we have the sign, the exponent, and the significand, we're ready to calculate the value. Remember that it is of the following form, and that we have to make it positive or negative depending on the sign:

     Value = Significand * 2^Exponent
     Value = 1.923828125 * 2^6
     Value = 123.125

There, we've just calculated a value from four bytes.
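The whole procedure can be sketched compactly in Python, following the sign/exponent/mantissa steps exactly as described above (decode_single is an illustrative name, and only normalized numbers are handled). Python's struct module does the same conversion in a single call, which makes a handy cross-check:

```python
import struct

def decode_single(b1, b2, b3, b4):
    """Decode four bytes (in big-endian order) into a float using the
    sign / exponent / mantissa arithmetic from the text."""
    bits = "".join(format(b, "08b") for b in (b1, b2, b3, b4))
    sign = -1.0 if bits[0] == "1" else 1.0
    exponent = int(bits[1:9], 2) - 127        # remove the bias of 127
    significand = 1.0                          # the implied leading 1
    for i, bit in enumerate(bits[9:], start=1):
        if bit == "1":
            significand += 2.0 ** -i           # negative powers of two
    return sign * significand * 2.0 ** exponent

# The worked example from the text, cross-checked against struct:
assert decode_single(66, 246, 64, 0) == 123.125
assert decode_single(66, 246, 64, 0) == struct.unpack(">f", bytes([66, 246, 64, 0]))[0]
```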

Going from Single Precision Floating Point Numbers to Bytes
To calculate the four bytes that represent a given single precision floating point value, you basically just have to go through the above calculations in reverse. However, it's a little more involved. Let's go through an example again to explain it. To make things more interesting, let's use 0.1 as the number we're going to convert. As a note, remember that the method for big endian or little endian is the same up until the last step.

The first step is to find the exponent. This is done by taking the log base 2 of the absolute value of the number, and then rounding down to the nearest integer (9.9 rounds down to 9, -3.2 rounds down to -4). Remember that Log in Visual Basic is the natural log (base e), so think back to your high school algebra days to remember how to find the log of a number in any base. To save a little computational time, I defined Log2 as a constant equal to 0.69314718056 (the natural log of 2). Here's the code I use to determine the exponent:

     Exponent = Int(Log(Abs(float)) / Log2)
     BiasedExponent = Exponent + 127

For our example with the float equal to 0.1, the exponent is -4, so the biased exponent is 123, or 01111011 in binary.
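The same exponent calculation looks like this in Python, for comparison (exponent_of is just an illustrative name; math.floor plays the role of VB's Int, rounding toward negative infinity):

```python
import math

LOG2 = math.log(2.0)   # natural log of 2, as in the VB constant

def exponent_of(f):
    # floor of log2(|f|): e.g. 0.1 -> -4, 123.125 -> 6
    return math.floor(math.log(abs(f)) / LOG2)

assert exponent_of(0.1) == -4
assert exponent_of(123.125) == 6
```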

Next, you need to find the mantissa. This is pretty similar to the way you convert an integer from decimal to binary: you go through checking each digit to see whether it makes the value less than or greater than the value you're trying to approximate. Here's the code that I use. Remember that the 1 before the decimal point is implied for the significand. I should also add that what I refer to as Mantissa and MantissaTemp below are actually the significand, and only MantissaString refers to the actual mantissa (I'm leaving the names as they are to keep them consistent with the included sample code; once I get around to modifying the sample code, I will correct this).

     Mantissa = 1
     MantissaString = ""
     absfloat = Abs(float)
     For ctr = 1 To 23
       MantissaTemp = Mantissa + 2 ^ -ctr
       If MantissaTemp * 2 ^ Exponent <= absfloat Then
         MantissaString = MantissaString + "1"
         Mantissa = MantissaTemp
       Else
         MantissaString = MantissaString + "0"
       End If
     Next ctr

In our example with 0.1, the significand ends up being 1.60000002384186. More interestingly, in binary it's represented as 1.10011001100110011001101. Taking the last 23 digits behind the decimal point (the 23 that the code above determines), our mantissa is 10011001100110011001101.

Finally, our number is positive, so the leading bit will be 0. So we can represent this number in a single precision floating point as

     0 01111011 10011001100110011001101

Breaking it up into 4 bytes

     00111101 11001100 11001100 11001101

and finally, determining their decimal values

     61 204 204 205

If you want the values in big endian order (Macs), you're already done. For little endian, simply reverse the order of the bytes.

     205 204 204 61

As an aside, and for a bit of theory: if you notice, the mantissa is just a repeating pattern of 1100, where the last digit got rounded up. The number 0.1 cannot be represented exactly in binary, and thus cannot be exactly represented with a single precision floating point number, similar to the way 1/3 can't be represented exactly in decimal without an infinite number of digits. In fact, many numbers that we're used to representing in decimal can't be exactly represented by a single precision floating point number, but for our purposes with X-Plane, they should be close enough that you don't need to worry about it. It's only in special applications that this discrepancy becomes important.
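If you just need the bytes and not the theory, Python's struct module performs this entire encoding in one call; here it reproduces the 0.1 example above in both byte orders:

```python
import struct

# struct.pack does the whole sign/exponent/mantissa encoding at once
big_endian = struct.pack(">f", 0.1)      # Mac / big endian order
little_endian = struct.pack("<f", 0.1)   # PC / little endian order

assert list(big_endian) == [61, 204, 204, 205]
assert list(little_endian) == [205, 204, 204, 61]
```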

Double Precision Floating Point Numbers
Double precision floating point numbers work the same way as single precision, only they use 8 bytes instead of 4. The breakdown of the 64 bits is 1 digit, 11 digits, 52 digits. The first bit is the sign bit, the next 11 are the biased exponent, which is biased by 1023, not 127 like a single precision, and the last 52 are the mantissa. Everything is calculated the same way as described above for single precision numbers, only you'll need to update your code appropriately for the new lengths and bias. The only application I know of for double precision with UDP and X-Plane is when sending a VEH1 packet: latitude, longitude, and altitude must be represented as double precision floating point numbers.
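For doubles, only the length changes; in Python's struct module that's just a different format character. A quick sketch (the latitude value here is made up purely for illustration):

```python
import struct

lat = 47.4647                        # hypothetical latitude for a VEH1 packet
raw = struct.pack("<d", lat)         # 8 bytes, little-endian (PC) order

assert len(raw) == 8
assert struct.unpack("<d", raw)[0] == lat   # the value round-trips exactly
```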

Integers and Bytes
Calculating an integer from four bytes is pretty similar to calculating a floating point variable. You take all four bytes, convert them into four 8-digit binary numbers, and combine them into one long 32 digit binary number. Remember to use the proper Mac/PC convention for the order that the bytes are in. Once you have the 32 digit number, the first digit controls the sign of the number. If it is a zero, the number is positive. If it is a one, the number is negative.

Calculating a positive integer is easy: just convert the 31 digit binary number into a decimal number. However, a negative number is a bit different, because early computer engineers wanted an easy way to do subtraction. What they came up with is called "2's complement." It's really pretty simple: just invert all the bits and add 1. So as an example, 3 would be represented in binary as:

     0 0000000 00000000 00000000 00000011

Note that this is just 11 (bin), with a whole bunch of leading zeros, and a zero in the sign bit. Negative 3 would be:

     1 1111111 11111111 11111111 11111100 + 1
     1 1111111 11111111 11111111 11111101

An X-Plane specific note: if the only integers you're going to deal with are the index numbers, then you only need to look at the last byte. And in this case, there's no need to go through the steps of converting to binary and back to decimal, since you know the first three bytes are going to be zero, and the other byte's going to be the number (again, remember to reverse the bytes for PCs).

If you're interested in why computer engineers decided to use 2's complement, here's the explanation: it makes subtraction easier. I'll show this with an example. Say you wanted to perform the operation 5 - 3. This is the same as 5 + (-3). So, we can use the 2's complement of 3, and then do normal addition. (Remember that in binary, 1 + 1 = 10.)

      0 0000000 00000000 00000000 00000101
     +1 1111111 11111111 11111111 11111101
      ------------------------------------
     10 0000000 00000000 00000000 00000010

Since the result only holds 32 bits, that leading 1 goes into overflow and gets dropped, so we're left with:

      0 0000000 00000000 00000000 00000010

This converts to 2 in decimal, so we can see that the math did come out correctly. You don't really need to know this theory for X-Plane, but it is nice to know why it's done the way it is.
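The invert-and-add-one rule is easy to try out in Python (twos_complement is an illustrative helper, and the final mask plays the part of the dropped overflow bit):

```python
def twos_complement(n, bits=32):
    # invert all the bits, add 1, and keep only the low 32 bits
    return ((~n) + 1) & ((1 << bits) - 1)

neg3 = twos_complement(3)
assert neg3 == 0xFFFFFFFD                    # 1111...1101 in binary

# 5 - 3 as 5 + (-3): masking off bit 32 models the dropped carry
assert (5 + neg3) & 0xFFFFFFFF == 2

# Python can also read the four bytes back as a signed integer directly:
assert int.from_bytes(bytes([255, 255, 255, 253]), "big", signed=True) == -3
```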

Letters and Symbols and Bytes
This is a lot simpler than the conversion between a floating point number and the four bytes. Letters and symbols each correspond to an integer number. The number is between 0 and 255, so it can be represented as a single byte. Here is a list of all the symbols and their corresponding code, in MS Word format. So, to convert the letter "A" to a byte, we just look up what its code is, and find that it is 65. Note that a lower case "a" is 97, which is different from an uppercase "A". If you look at the above list, you'll also notice that each of the digits 0 through 9 has a code which is different from the digit itself. For example, a "9" is 57. This is because in this format, the digits are being represented as strings, not as numbers.
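Python's ord() and chr() (like VB's Asc() and Chr()) do these lookups for you, and confirm the codes mentioned above:

```python
# ord() gives the code for a symbol; chr() goes the other way
assert ord("A") == 65
assert ord("a") == 97       # lower case is a different code
assert ord("9") == 57       # the character "9", not the number 9
assert chr(68) + chr(65) + chr(84) + chr(65) == "DATA"
```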

Handling UDP Packets in Visual Basic
Since all the information sent via UDP is in bytes, it makes sense to use a byte variable to handle it. And since there are a lot of bytes being sent, we should use an array. Here is the way I declared the array to handle incoming data:

     Dim PacketData() As Byte

Now, when we read the packet data in, using the code mentioned earlier in this tutorial:

     WinsockUDP_Rx.GetData PacketData

each of the bytes is stored into an element in the array PacketData. If we know something about the format of the data being sent, we can decode it into the variables that we need.

Similarly, when we're making up a data packet to send out, it's useful to define it as a byte array. Then, we convert all of our values into the proper bytes, and send out the packet:

     WinsockUDP_Tx.SendData PacketData

Breaking Down an X-Plane UDP Packet
Now, let's take a look at what a UDP packet being sent from X-Plane looks like. This is another place where I got a little lost looking at the information on x-plane.info. But once I figured out everything was in bytes, it made a lot more sense. A UDP packet contains a header with some network information, but Visual Basic does not import that into the program when we use the .GetData command. So, we only see the body part of the packet. When I talk about packets in Visual Basic, that is what I'm referring to. A typical DATA packet being sent out from X-Plane may look something like:

      68 65 84 65 38 0 0 0 37 68 151 111 166 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

So what does all this mean? The first five bytes are what X-Plane uses for its header. Each of these bytes is actually an ASCII code, so we convert each of them into a symbol. The first 4 bytes of the header tell us what type of packet it is. In this example, they're 68, 65, 84, 65, which correspond to D, A, T, A, respectively, so we know this is a DATA packet. The fifth byte in the header, 38 in this example, is an index used internally by X-Plane that we don't really need to worry about (I'm not exactly sure what it does, to tell the truth). When creating a data packet to send back to X-Plane, I just set this value to 48, the ASCII code for "0."

Now comes a group of 36 bytes. This is the data segment. The first 4 bytes are the index, as an integer, and the next 32 bytes are the data for that index. The best explanation of what each index is, and what data are sent on that index, is to simply look at the Data Input & Output screen in X-Plane. Remember that PCs and Macs reverse the order of the bytes: on a PC, look at the first of the 4 index bytes, and on a Mac, look at the fourth. That byte is the index number. In our example above the byte is 37, which means index 37, which is engine rpm.

Now there are 32 bytes left in this data segment. This is 8 groups of 4, or 8 single precision floating point numbers. You convert them in the manner described above. The first number in our example is the four bytes 68, 151, 111, 166, or about 1211.51. The remaining 7 data points in this example are all zero.

A DATA packet can end there, as in the above example, or it could be followed by any number of additional 36 byte data segments, which you treat the same way as described above. From version 8 on, there is no special symbol to designate the end of a packet. If you want to handle an arbitrary number of data segments, you'll just have to count how long the UDP packet is, and calculate the number of data segments from that (NumberChannels = (bytesTotal - 5) / 36).
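Putting the whole layout together, here's a sketch of a DATA packet parser in Python (parse_data_packet is an illustrative name; little-endian PC byte order is assumed):

```python
import struct

def parse_data_packet(packet):
    """Split the body of an X-Plane DATA packet into {index: [8 floats]}.
    Assumes little-endian (PC) byte order."""
    assert packet[0:4] == b"DATA"           # 4-byte packet type
    body = packet[5:]                       # skip the 5-byte header
    channels = {}
    for offset in range(0, len(body), 36):  # one 36-byte segment per channel
        index = struct.unpack_from("<i", body, offset)[0]
        values = struct.unpack_from("<8f", body, offset + 4)
        channels[index] = list(values)
    return channels
```

Each 36-byte segment is 4 index bytes plus 8 four-byte floats, which is where the NumberChannels formula above comes from.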

 

Miscellaneous
There are a few more things that I discovered while writing my program that you'll probably find useful. Remember that there is no error checking for UDP, which means packets can get lost. Through experimentation sending DSEL and USEL packets (the packets used to request that X-Plane start or stop sending a specific data channel), I found that when sending four packets total, one right after the other in the code, on average only two of the packets made it through each time. However, sending the packets with a timer set to an interval of 10 ms seemed to work just fine. My recommendation is that if you need to send a lot of DSEL or USEL information at once, or if you need to send several DATA channels at once, you combine them into one packet before sending. This way, there is much less of a chance that the packets will be lost.
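That recommendation can be sketched in Python: pack every channel into a single DATA packet before sending (build_data_packet is an illustrative name; little-endian PC byte order is assumed):

```python
import struct

def build_data_packet(channels):
    """Combine several {index: [8 floats]} channels into ONE DATA packet,
    so a burst of separate packets can't be partially lost."""
    packet = b"DATA" + b"0"                # 4-byte type plus the internal-use byte
    for index, values in channels.items():
        packet += struct.pack("<i", index)
        packet += struct.pack("<8f", *values)
    return packet
```

The resulting packet is 5 bytes of header plus 36 bytes per channel, and can be sent with a single .SendData call (or sendto in Python).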

Another note: X-Plane uses the value -999 to represent no data. So, if you want to update only one value in a data channel, specify the other values as -999, and X-Plane will leave them alone.

And finally, if you try to update the joystick or a few other related channels, X-Plane will think you want complete control of the joystick, and will stop looking at data from the actual joystick. If you want your program to control the airplane while still letting you use the joystick, have it control the trim settings. If you do send a packet that overrides the joystick, and want X-Plane to start looking at the joystick again, send a data packet with -999 in the appropriate channels.

 

Conclusion
Well, I think that should be a pretty good starting point. I know it took me a while to figure all of the above out. But using this page, the source code of the program I wrote, and the information at www.x-plane.info, you should be able to figure out how to write your own programs to interact with X-Plane via UDP. If you have any questions, e-mail me. Good luck.

 

A More Efficient Method of Converting Between Byte Arrays and Floating Point Values
As promised, here is the more efficient algorithm for converting between bytes and floating point values. This was sent in to me by someone going by Passel. I've simply copied and pasted his e-mail. I haven't had the time to try this yet, but I hope it works.

I thought I should write to let you know that your byte array to Floats (and vice versa) routines can be much more efficient. The bytes in the array are already in floating point format so we don't have to decode all the bits of the floating point format in order to combine the bytes or split the float into bytes. We just need to copy the bytes into the destination memory in the proper order.

In the case of the later X-plane versions always exporting in big-endian order, and since VB is strictly Intel based and thus little-endian, we really only need to always swap the bytes. It is ironic that big-endian was settled on, and the Macs are now going to Intel processors. Perhaps in the future, X-plane will reverse the interface again to favor the newer platforms.

In any case, I've rewritten your routines to give you an example of just swapping the bytes into the proper order and copying them to the destination, which is tremendously faster than all the string manipulations.

[This is where the algorithm was originally included in the e-mail. I've copied it below, to get it out of the blockquote section.]

passel
http://www.xtremevbtalk.com/

Option Explicit
Public Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" ( _
                        Destination As Any, _
                             Source As Any, _
                        ByVal Length As Long)

Sub ConvertBytesToSingle( _
  byte1 As Byte, _
  byte2 As Byte, _
  byte3 As Byte, _
  byte4 As Byte, _
  float As Single _
)

  Dim b(1 To 4) As Byte

  If FormMain.CheckReverseBytes.Value = Checked Then
    b(1) = byte1
    b(2) = byte2
    b(3) = byte3
    b(4) = byte4
  Else
    b(1) = byte4
    b(2) = byte3
    b(3) = byte2
    b(4) = byte1
  End If
  CopyMemory float, b(1), 4
End Sub

Sub ConvertSingleToBytes( _
  float As Single, _
  byte1 As Byte, _
  byte2 As Byte, _
  byte3 As Byte, _
  byte4 As Byte _
) 'This sub converts a value to four bytes for storage as a single precision floating point value

  Dim b(1 To 4) As Byte
  CopyMemory b(1), float, 4
  If FormMain.CheckReverseBytes.Value = Checked Then
    byte1 = b(1)
    byte2 = b(2)
    byte3 = b(3)
    byte4 = b(4)
  Else
    byte1 = b(4)
    byte2 = b(3)
    byte3 = b(2)
    byte4 = b(1)
  End If
End Sub

Sub ConvertBytesToDouble( _
  byte1 As Byte, _
  byte2 As Byte, _
  byte3 As Byte, _
  byte4 As Byte, _
  byte5 As Byte, _
  byte6 As Byte, _
  byte7 As Byte, _
  byte8 As Byte, _
  float As Double _
) 'This sub converts eight bytes to a Double precision floating point value

  Dim b(1 To 8) As Byte

  If FormMain.CheckReverseBytes.Value = Checked Then
    b(1) = byte1
    b(2) = byte2
    b(3) = byte3
    b(4) = byte4
    b(5) = byte5
    b(6) = byte6
    b(7) = byte7
    b(8) = byte8
  Else
    b(1) = byte8
    b(2) = byte7
    b(3) = byte6
    b(4) = byte5
    b(5) = byte4
    b(6) = byte3
    b(7) = byte2
    b(8) = byte1
  End If
  CopyMemory float, b(1), 8
End Sub

Sub ConvertDoubleToBytes( _
  float As Double, _
  byte1 As Byte, _
  byte2 As Byte, _
  byte3 As Byte, _
  byte4 As Byte, _
  byte5 As Byte, _
  byte6 As Byte, _
  byte7 As Byte, _
  byte8 As Byte _
) 'This sub converts a number to eight bytes for storage as a double precision floating point value

  Dim b(1 To 8) As Byte
  
  CopyMemory b(1), float, 8
  If FormMain.CheckReverseBytes.Value = Checked Then
    byte1 = b(1)
    byte2 = b(2)
    byte3 = b(3)
    byte4 = b(4)
    byte5 = b(5)
    byte6 = b(6)
    byte7 = b(7)
    byte8 = b(8)
  Else
    byte1 = b(8)
    byte2 = b(7)
    byte3 = b(6)
    byte4 = b(5)
    byte5 = b(4)
    byte6 = b(3)
    byte7 = b(2)
    byte8 = b(1)
  End If
End Sub