By separating reply addresses into an envelope we make it possible to write general purpose intermediaries such as APIs and proxies that create, read, and remove addresses no matter what the message payload or structure is.

In the request-reply pattern, the envelope holds the return address for replies. It is how a ZeroMQ network with no state can create round-trip request-reply dialogs.

But for most of the interesting request-reply patterns, you'll want to understand envelopes and particularly ROUTER sockets. We'll work through this step-by-step. A request-reply exchange consists of a request message, and an eventual reply message. In the simple request-reply pattern, there's one reply for each request. In more advanced patterns, requests and replies can flow asynchronously.

However, the reply envelope always works the same way. The ZeroMQ reply envelope formally consists of zero or more reply addresses, followed by an empty frame (the envelope delimiter), followed by the message body (zero or more frames). The envelope is created by multiple sockets working together in a chain. We'll break this down. We'll start by sending "Hello" through a REQ socket. The REQ socket creates the simplest possible reply envelope, which has no addresses, just an empty delimiter frame and the message frame containing the "Hello" string.

This is a two-frame message. The REP socket does the matching work: it strips off the envelope, up to and including the delimiter frame, saves the whole envelope, and passes the "Hello" string up to the application. Thus our original Hello World example used request-reply envelopes internally, but the application never saw them.

If you spy on the network data flowing between hwclient and hwserver, this is what you'll see: every request and every reply is in fact two frames, an empty frame and then the body.

This is the extended request-reply pattern we already saw in Chapter 2 - Sockets and Patterns. We can, in fact, insert any number of proxy steps. The mechanics are the same. As messages pass through a proxy, the proxy's ROUTER socket has to tell the caller which connection each message came from. The way it tells the caller is to stick the connection identity in front of each message received.

An identity, sometimes called an address, is just a binary string with no meaning except "this is a unique handle to the connection". Messages received are fair-queued from among all connected peers. As a historical note, ZeroMQ v2.x used UUIDs as identities, while later versions generate short binary identities by default. There's some impact on network performance, but only when you use multiple proxy hops, which is rare.

Mostly the change was to simplify building libzmq by removing the dependency on a UUID library. Identities are a difficult concept to understand, but they're essential if you want to become a ZeroMQ expert.

The core of the proxy loop is "read from one socket, write to the other", so we literally send these three frames (the identity, the empty delimiter, and the message body) out on the DEALER socket.
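Here is a minimal sketch, using the plain libzmq API, of what "read from one socket, write to the other" looks like frame by frame. The function name is ours; from and to are sockets created elsewhere, for instance a ROUTER and a DEALER:

    #include <zmq.h>

    //  Forward one complete multipart message from one socket to the
    //  other, preserving frame boundaries.
    static void
    forward_message (void *from, void *to)
    {
        int more = 1;
        while (more) {
            zmq_msg_t frame;
            zmq_msg_init (&frame);
            zmq_msg_recv (&frame, from, 0);
            more = zmq_msg_more (&frame);   //  Check before sending
            zmq_msg_send (&frame, to, more? ZMQ_SNDMORE: 0);
        }
    }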

The REP socket does as before, strips off the whole envelope including the new reply address, and once again delivers the "Hello" to the caller. Incidentally the REP socket can only deal with one request-reply exchange at a time, which is why if you try to read multiple requests or send multiple replies without sticking to a strict recv-send cycle, it gives an error.

You should now be able to visualize the return path. The ROUTER socket takes the first frame of the reply as the identity of the connection to send it to. If it finds that connection, it then pumps the next two frames out onto the wire. The REQ socket picks this message up, and checks that the first frame is the empty delimiter, which it is. The REQ socket discards that frame and passes "World" to the calling application, which prints it out to the amazement of the younger us looking at ZeroMQ for the first time. To be honest, the use cases for strict request-reply or extended request-reply are somewhat limited.

For one thing, there's no easy way to recover from common failures like the server crashing due to buggy application code. However, once you grasp the way these four sockets deal with envelopes, and how they talk to each other, you can do very useful things.

Now let's express this another way: ROUTER and DEALER sockets don't know anything about the empty delimiter. All they care about is that one identity frame that lets them figure out which connection to send a message to. We have four request-reply sockets, each with a certain behavior. We've seen how they connect in simple and extended request-reply patterns.

But these sockets are building blocks that you can use to solve many problems. Here are some tips for remembering the semantics: think of DEALER as an asynchronous REQ socket, and of ROUTER as an asynchronous REP socket. It's not always going to be this simple, but it is a clean and memorable place to start. Swapping the REQ client for a DEALER, for example, gives us an asynchronous client that can talk to multiple REP servers.

So, to send a message from a DEALER to a REP socket, we send an empty delimiter frame first, and then the message body (sketched below). Replacing the REP server with a ROUTER instead gives us an asynchronous server that can talk to multiple REQ clients at the same time. We saw this in the Chapter 2 - Sockets and Patterns mtserver example. We can use a ROUTER in two distinct cases: as a proxy that switches messages between frontend and backend sockets, and as an application that reads the message and acts on it. In the first case, the ROUTER simply reads all frames, including the artificial identity frame, and passes them on blindly.
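A sketch of the DEALER side of that exchange with the plain libzmq API; client is a DEALER socket already connected to a REP server. Unlike REQ, the DEALER must write and read the empty delimiter frame itself:

    //  Send: empty delimiter frame, then the body
    zmq_send (client, "", 0, ZMQ_SNDMORE);
    zmq_send (client, "Hello", 5, 0);

    //  Receive: the reply comes back with the same shape
    char buffer [256];
    zmq_recv (client, buffer, sizeof buffer, 0);             //  Empty delimiter
    int size = zmq_recv (client, buffer, sizeof buffer, 0);  //  Body, e.g. "World"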

The DEALER-to-ROUTER combination gives us asynchronous clients talking to asynchronous servers, where both sides have full control over the message formats. You can also swap a REP for a DEALER, if the DEALER is talking to one and only one peer; the worker can then go fully asynchronous, sending any number of replies back. Whether that's worthwhile depends on whether you actually need to send replies or not. The cost is that you have to manage the reply envelopes yourself, and get them right, or nothing at all will work.

We'll see a worked example later. The ROUTER-to-ROUTER combination sounds perfect for N-to-N connections, but it's the most difficult combination to use. You should avoid it until you are well advanced with ZeroMQ. Mostly, trying to connect clients to clients, or servers to servers, is a bad idea and won't work. However, rather than give general vague warnings, I'll explain in detail. The common thread in this valid versus invalid breakdown is that a ZeroMQ socket connection is always biased towards one peer that binds to an endpoint, and another that connects to that.

Further, which side binds and which side connects is not arbitrary, but follows natural patterns. The side which we expect to "be there" binds: it'll be a server, a broker, a publisher, a collector. The side that "comes and goes" connects: it'll be clients and workers. Remembering this will help you design better ZeroMQ architectures. We've already seen how ROUTER sockets work by routing individual messages to specific connections. I'll explain in more detail how we identify those connections, and what a ROUTER socket does when it can't send a message.

More broadly, identities are used as addresses in the reply envelope. Independently, a peer can have an address that is physical (a network endpoint like "tcp://192.168.55.117:5670") or logical (a UUID or email address or other unique key). An application that uses a ROUTER socket to talk to specific peers can convert a logical address to an identity if it has built the necessary hash table.

Because ROUTER sockets only announce the identity of a connection to a specific peer when that peer sends a message, you can only really reply to a message, not spontaneously talk to a peer. It works as follows: if you send a message with an identity the ROUTER doesn't recognize, the ROUTER silently discards it. It's an attitude that makes sense in working code, but it makes debugging hard.
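Newer versions of libzmq let you turn that silence into an error with the ZMQ_ROUTER_MANDATORY socket option. A minimal sketch, where router is an existing ROUTER socket:

    #include <errno.h>
    #include <zmq.h>

    int mandatory = 1;
    zmq_setsockopt (router, ZMQ_ROUTER_MANDATORY, &mandatory, sizeof mandatory);

    //  Sending to an identity the ROUTER doesn't know now fails with
    //  EHOSTUNREACH instead of being silently dropped
    if (zmq_send (router, "unknown-peer", 12, ZMQ_SNDMORE) == -1
    &&  errno == EHOSTUNREACH)
        ;   //  No route to that peer: handle the error here
            //  (on success we would continue sending the remaining frames)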

The "send identity as first frame" approach is tricky enough that we often get this wrong when we're learning, and the ROUTER's stony silence when we mess up isn't very constructive. Now let's look at some code. These two examples follow the same logic, which is a load balancing pattern. This pattern is our first exposure to using the ROUTER socket for deliberate routing, rather than simply acting as a reply channel.

The load balancing pattern is very common and we'll see it several times in this book. It's the post office analogy. If you have one queue per counter, and you have some people buying stamps (a fast, simple transaction), and some people opening new accounts (a very slow transaction), then you will find stamp buyers getting unfairly stuck in queues.

Just as in a post office, if your messaging architecture is unfair, people will get annoyed. The solution in the post office is to create a single queue so that even if one or two counters get stuck with slow work, other counters will continue to serve clients on a first-come, first-serve basis. If you arrive in any major US airport, you'll find long queues of people waiting at immigration.

The border patrol officials will send people in advance to queue up at each counter, rather than using a single queue. Having people walk fifty yards in advance saves a minute or two per passenger. And because every passport check takes roughly the same time, it's more or less fair. This is a recurring theme with ZeroMQ: the world's problems are diverse, and you can benefit from solving different problems each in the right way. The airport isn't the post office, and one size fits no one, really well.

The broker has to know when the worker is ready, and keep a list of workers so that it can take the least recently used worker each time. The solution is really simple, in fact: workers send a "ready" message when they start, and again after they finish each task. The broker reads these messages one-by-one. Each time it reads a message, it is from the last used worker. And because we're using a ROUTER socket, we get an identity that we can then use to send a task back to the worker. It's a twist on request-reply because the task is sent with the reply, and any response for the task is sent as a new request.

The following code examples should make it clearer. The example runs for five seconds and then each worker prints how many tasks they handled. If the routing worked, we'd expect a roughly even distribution of work. To talk to the workers in this example, we have to create a REQ-friendly envelope consisting of an identity plus an empty envelope delimiter frame, as sketched below.
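A sketch of that envelope construction over a raw libzmq ROUTER socket; the function name is ours, not from the example code:

    #include <string.h>
    #include <zmq.h>

    //  Address a specific REQ worker through a ROUTER socket. The ROUTER
    //  consumes the identity frame and uses it for routing; the worker's
    //  REQ socket then sees [empty][body] as usual.
    static void
    send_to_worker (void *router, const char *identity, const char *body)
    {
        zmq_send (router, identity, strlen (identity), ZMQ_SNDMORE);
        zmq_send (router, "", 0, ZMQ_SNDMORE);  //  Delimiter for the REQ
        zmq_send (router, body, strlen (body), 0);
    }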

The synchronous versus asynchronous behavior has no effect on our example because we're doing strict request-reply. It is more relevant when we address recovering from failures, which we'll come to in Chapter 4 - Reliable Request-Reply Patterns.

The code is almost identical except that the worker uses a DEALER socket, and reads and writes that empty frame before the data frame. However, remember the reason for that empty delimiter frame: it's there so that an extended request-reply chain can terminate in a REP socket, which uses the delimiter to split the reply envelope from the data. If we never need to pass the message along to a REP socket, we can simply drop the empty delimiter frame at both sides, which makes things simpler.

The previous example is half-complete. It can manage a set of workers with dummy requests and replies, but it has no way to talk to clients. If we add a second frontend ROUTER socket that accepts client requests, and turn our example into a proxy that can switch messages from frontend to backend, we get a useful and reusable tiny load balancing message broker. The difficult parts of this program are (a) the envelopes that each socket reads and writes, and (b) the load balancing algorithm.

We'll take these in turn, starting with the message envelope formats. Let's walk through a full request-reply chain from client to worker and back. In this code we set the identity of client and worker sockets to make it easier to trace the message frames.

The client application sends a single frame containing "Hello". The broker sends this to the worker, prefixed by the address of the chosen worker, plus an additional empty part to keep the REQ at the other end happy. Then the REQ socket in the worker removes the empty part, and provides the rest to the worker application. The worker has to save the envelope (which is all the parts up to and including the empty message frame) and then it can do what's needed with the data part.

On the return path, the messages are the same as when they come in, i.e., the backend socket gives the broker a message in five parts, and the broker sends the frontend socket a message in three parts, and the client gets a message in one part. Now let's look at the load balancing algorithm. It requires that both clients and workers use REQ sockets, and that workers correctly store and replay the envelope on messages they get. Its core polling structure is sketched below.
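A sketch of the broker's polling structure with the plain libzmq API. The message handling is elided and the names are ours; the point is that we poll the backend always, but the frontend only when a worker is available:

    #include <zmq.h>

    static void
    broker_loop (void *frontend, void *backend)
    {
        int available_workers = 0;
        while (1) {
            zmq_pollitem_t items [] = {
                { backend,  0, ZMQ_POLLIN, 0 },
                { frontend, 0, ZMQ_POLLIN, 0 }
            };
            //  Skip the frontend entry when no worker is available
            if (zmq_poll (items, available_workers? 2: 1, -1) == -1)
                break;              //  Interrupted
            if (items [0].revents & ZMQ_POLLIN) {
                //  Backend: first frame is the worker's identity; queue
                //  it, then either discard a READY or route a reply back
                //  out through the frontend
                available_workers++;
            }
            if (items [1].revents & ZMQ_POLLIN) {
                //  Frontend: pop the least recently used worker and
                //  forward the client's request with its envelope
                available_workers--;
            }
        }
    }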

You should now see that you can reuse and extend the load balancing algorithm with variations based on the information the worker provides in its initial "ready" message. For example, workers might start up and do a performance self test, then tell the broker how fast they are. The broker can then choose the fastest available worker rather than the oldest.

There's a reason for this detour: look at the core of the worker thread from our load balancing broker. That code isn't even reusable because it can only handle one reply address in the envelope, and it already does some wrapping around the ZeroMQ API. If we used the libzmq simple message API, this is the kind of boilerplate we'd have to write (see the sketch below). And when code is too long to write quickly, it's also too long to understand. When the raw API gets in our way, we have to treat it as a problem to solve.
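For a taste of that verbosity, here is a sketch of reading one whole multipart message with the basic libzmq API, frame by frame; worker is an existing socket, and the 10-frame cap is arbitrary:

    #include <zmq.h>

    zmq_msg_t frames [10];
    int frame_count = 0;
    int more = 1;
    while (more && frame_count < 10) {
        zmq_msg_init (&frames [frame_count]);
        zmq_msg_recv (&frames [frame_count], worker, 0);
        more = zmq_msg_more (&frames [frame_count]);
        frame_count++;
    }
    //  ...use the frames, then close every single one of them
    for (int i = 0; i < frame_count; i++)
        zmq_msg_close (&frames [i]);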

We can't of course just change the ZeroMQ API, which is a documented public contract on which thousands of people agree and depend. Instead, we construct a higher-level API on top based on our experience so far, and most specifically, our experience from writing more complex request-reply patterns.

What we want is an API that lets us receive and send an entire message in one shot, including the reply envelope with any number of reply addresses. One that lets us do what we want with the absolute least lines of code. Making a good message API is fairly difficult.

We have a problem of terminology: ZeroMQ uses "message" to describe both multipart messages, and individual message frames. We have a problem of expectations: sometimes it's natural to see message content as printable string data, sometimes as binary blobs. And we have technical challenges, especially if we want to avoid copying data around too much. The challenge of making a good API affects all languages, though my specific use case is C.

Whatever language you use, think about how you could contribute to your language binding to make it as good or better than the C binding I'm going to describe. My solution is to use three fairly natural and obvious concepts: strings (already the basis for our s_send and s_recv helpers), frames (a message frame), and messages (a list of one or more frames). Here is the worker code, rewritten onto an API using these concepts:

Cutting the amount of code we need to read and write complex messages is great: the results are easy to read and understand. Let's continue this process for other aspects of working with ZeroMQ. This high-level binding, in fact, developed out of earlier versions of the examples; it is CZMQ, the high-level C binding for ZeroMQ.

It combines nicer semantics for working with ZeroMQ with some portability layers, and (importantly for C, but less for other languages) containers like hashes and lists. CZMQ also uses an elegant object model that leads to frankly lovely code. One thing CZMQ provides is clean interrupt handling: when the process is interrupted (e.g., by Ctrl-C), blocking calls exit with an error instead of hanging, and the high-level recv methods will return NULL in such cases.

So, you can cleanly exit a loop by checking for a NULL return (the raw-libzmq equivalent is sketched below). So how about reactors? The CZMQ zloop reactor is simple but functional. It rebuilds its poll set each time you add or remove readers, and it calculates the poll timeout to match the next timer. Then, it calls the reader and timer handlers for each socket and timer that need attention. The actual handling of messages sits inside dedicated functions or methods.
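In raw libzmq, the equivalent is to install a signal handler yourself and watch for EINTR from blocking calls. A minimal sketch, where socket is an existing socket:

    #include <errno.h>
    #include <signal.h>
    #include <zmq.h>

    static volatile int s_interrupted = 0;
    static void s_signal_handler (int sig) { (void) sig; s_interrupted = 1; }

    //  ...install the handlers, then loop until Ctrl-C
    signal (SIGINT,  s_signal_handler);
    signal (SIGTERM, s_signal_handler);
    while (!s_interrupted) {
        char buffer [256];
        if (zmq_recv (socket, buffer, sizeof buffer, 0) == -1
        &&  errno == EINTR)
            break;      //  Blocking call was interrupted by the signal
        //  ...otherwise handle the message
    }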

You may not like the style—it's a matter of taste. What it does help with is mixing timers and socket activity. Getting applications to properly shut down when you send them Ctrl-C can be tricky. If you use the zctx class it'll automatically set up signal handling, but your code still has to cooperate.

If you have nested loops, it can be useful to make the outer ones conditional on the interrupt flag (in CZMQ, !zctx_interrupted). If you're using child threads, they won't receive the interrupt.

To tell them to shut down, you can either destroy the context they share (so that their blocking calls end with an error), or send them shutdown messages over a socket. Now for something different: we can turn the ROUTER-to-DEALER pattern upside down to get a very useful N-to-1 architecture where various clients talk to a single server, and do this asynchronously. The example runs in one process, with multiple threads simulating a real multiprocess architecture. When you run the example, you'll see three clients (each with a random ID), printing out the replies they get from the server. Look carefully and you'll see each client task gets 0 or more replies per request.

If the workers were strictly synchronous, we'd use REP. However, because we want to send multiple replies, we need an async socket, so we use DEALER.

We do not want to route replies; they always go to the single server thread that sent us the request. Let's think about the routing envelope.

The client sends a message consisting of a single frame. The server thread receives a two-frame message (the original message prefixed by the client identity). We send these two frames on to the worker, which treats them as a normal reply envelope, and returns them to us as a two-frame message. We then use the first frame as an identity to route the second frame back to the client as a reply. The plumbing looks like the sketch below.
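A sketch of the server-side plumbing, with illustrative endpoint names; zmq_proxy shuttles the frames between the client-facing ROUTER and the worker-facing DEALER:

    #include <zmq.h>

    void *ctx      = zmq_ctx_new ();
    void *frontend = zmq_socket (ctx, ZMQ_ROUTER);  //  Talks to clients
    void *backend  = zmq_socket (ctx, ZMQ_DEALER);  //  Talks to workers
    zmq_bind (frontend, "tcp://*:5570");
    zmq_bind (backend, "inproc://backend");
    //  Worker threads connect DEALER sockets to "inproc://backend";
    //  each receives [identity][body], and replies with the identity
    //  frame first so the ROUTER can route the reply to the client
    zmq_proxy (frontend, backend, NULL);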

Now for the sockets: we could use the load balancing ROUTER-to-DEALER pattern to talk to workers, but it's extra work. In this case, a DEALER-to-DEALER pattern is probably fine: the trade-off is lower latency for each request, but a higher risk of unbalanced work distribution. Simplicity wins in this case. When you build servers that maintain stateful conversations with clients, you will run into a classic problem. If the server keeps some state per client, and clients keep coming and going, eventually it will run out of resources.

Even if the same clients keep connecting, if you're using default identities, each connection will look like a new one. We cheat in the above example by keeping state only for a very short time (the time it takes a worker to process a request) and then throwing away the state. But that's not practical for many cases. To properly manage client state in a stateful asynchronous server, you have to detect when clients disappear (for example, by heartbeating from client to server), store state against each client identity, and expire that state when a client goes silent for too long. Let's take everything we've seen so far, and scale things up to a real application.

We'll build this step-by-step over several iterations. Our best client calls us urgently and asks for a design of a large cloud computing facility. He has this vision of a cloud that spans many data centers, each a cluster of clients and workers, and that works together as a whole.

Because we're smart enough to know that practice always beats theory, we propose to make a working simulation using ZeroMQ. Our client, eager to lock down the budget before his own boss changes his mind, and having read great things about ZeroMQ on Twitter, agrees. Several espressos later, we want to jump into writing code, but a little voice tells us to get more details before making a sensational solution to entirely the wrong problem. So we do a little calculation and see that this will work nicely over plain TCP.

It's a straightforward problem that requires no exotic hardware or protocols, just some clever routing algorithms and careful design. We start by designing one cluster one data center and then we figure out how to connect clusters together. Workers and clients are synchronous. We want to use the load balancing pattern to route tasks to workers. Workers are all identical; our facility has no notion of different services. Workers are anonymous; clients never address them directly. We make no attempt here to provide guaranteed delivery, retry, and so on.

For reasons we already examined, clients and workers won't speak to each other directly: that would make it impossible to add or remove nodes dynamically. So our basic model consists of the request-reply message broker we saw earlier.

Now we scale this out to more than one cluster. Each cluster has a set of clients and workers, and a broker that joins these together. There are a few possibilities, each with pros and cons: clients could connect to both brokers, workers could connect to both brokers, or the brokers could connect to each other. Let's explore Idea 1. In this model, we have workers connecting to both brokers and accepting jobs from either one. However, it doesn't provide what we wanted, which was that clients get local workers if possible and remote workers only if it's better than waiting.

Also workers will signal "ready" to both brokers and can get two jobs at once, while other workers remain idle. It seems this design fails because again we're putting routing logic at the edges. So, idea 2 then. We interconnect the brokers and don't touch the clients or workers, which are REQs like we're used to. This design is appealing because the problem is solved in one place, invisible to the rest of the world. Basically, brokers open secret channels to each other and whisper, like camel traders, "Hey, I've got some spare capacity.

If you have too many clients, give me a shout and we'll deal". In effect it is just a more sophisticated routing algorithm: brokers become subcontractors for each other. There are other things to like about this design, even before we play with real code: it treats the common case (clients and workers on the same cluster) as the default, and does extra work only for the exceptional case of shifting jobs between clusters. We'll now make a worked example. We'll pack an entire cluster into one process. That is obviously not realistic, but it makes it simple to simulate, and the simulation can accurately scale to real processes. This is the beauty of ZeroMQ—you can design at the micro-level and scale that up to the macro-level.

Threads become processes, then become boxes, and the patterns and logic remain the same. Each of our "cluster" processes contains client threads, worker threads, and a broker thread. There are several possible ways to interconnect brokers.

What we want is to be able to tell other brokers, "we have capacity", and then receive multiple tasks. We also need to be able to tell other brokers, "stop, we're full". It doesn't need to be perfect; sometimes we may accept jobs we can't process immediately, then we'll do them as soon as possible. The simplest interconnect is federation, in which brokers simulate clients and workers for each other. We would do this by connecting our frontend to the other broker's backend socket. Note that it is legal to both bind a socket to an endpoint and connect it to other endpoints.

This would give us simple logic in both brokers and a reasonably good mechanism: when there are no local jobs, tell the other broker "ready", and accept one job from it. The problem is that it is too simple for this problem. A federated broker would be able to handle only one task at a time. If the broker emulates a lock-step client and worker, it is by definition also going to be lock-step, and if it has lots of available workers they won't be used.

Our brokers need to be connected in a fully asynchronous fashion. The federation model is perfect for other kinds of routing, especially service-oriented architectures (SOAs), which route by service name and proximity rather than load balancing or round robin. So don't dismiss it as useless; it's just not right for all use cases.

Instead of federation, let's look at a peering approach in which brokers are explicitly aware of each other and talk over privileged channels. Let's break this down, assuming we want to interconnect N brokers. Each broker has N - 1 peers, and all brokers are using exactly the same code and logic. There are two distinct flows of information between brokers: state (each broker tells its peers how many workers it has available) and tasks (requests and replies being shifted between clusters).

Choosing good names is vital to keeping a multisocket juggling act reasonably coherent in our minds. Sockets do something and what they do should form the basis for their names.

It's about being able to read the code several weeks later on a cold Monday morning before coffee, and not feel any pain. Finding meaningful names that are all the same length means our code will align nicely.

It's not a big thing, but attention to details helps. For each flow the broker has two sockets that we can orthogonally call the frontend and backend. We've used these names quite often. A frontend receives information or tasks. A backend sends those out to other peers. The conceptual flow is from front to back with replies going in the opposite direction from back to front. For our transport and because we're simulating the whole thing on one box, we'll use ipc for everything.

This has the advantage of working like tcp in terms of connectivity, i.e., it is a disconnected transport. Instead, we will use ipc endpoints called something-local, something-cloud, and something-state, where something is the name of our simulated cluster. You might be thinking that this is a lot of work for some names. Why not call them s1, s2, s3, s4, etc.? The answer is that if your brain is not a perfect machine, you need a lot of help when reading code, and we'll see that these names do help.

It's easier to remember "three flows, two directions" than "six different sockets". Note that we connect the cloudbe in each broker to the cloudfe in every other broker, and likewise we connect the statebe in each broker to the statefe in every other broker. Because each socket flow has its own little traps for the unwary, we will test them in real code one-by-one, rather than try to throw the whole lot into code in one go.

When we're happy with each flow, we can put them together into a full program. We'll start with the state flow. We can build this little program and run it three times to simulate three clusters. We run three commands, each in a separate window (listed below). You'll see each cluster report the state of its peers, and after a few seconds they will all happily be printing random numbers once per second.
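Assuming the state-flow prototype builds to a program called peering1 that takes its own cluster name followed by its peers' names (as in the original example), the three commands look like this:

    peering1 DC1 DC2 DC3  #  Start DC1 and connect to DC2 and DC3
    peering1 DC2 DC1 DC3  #  Start DC2 and connect to DC1 and DC3
    peering1 DC3 DC1 DC2  #  Start DC3 and connect to DC1 and DC2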

Try this and satisfy yourself that the three brokers all match up and synchronize to per-second state updates. In real life, we'd not send out state messages at regular intervals, but rather whenever we had a state change, i.e., whenever a worker becomes available or unavailable. That may seem like a lot of traffic, but state messages are small and we've established that the inter-cluster connections are super fast.

If we wanted to send state messages at precise intervals, we'd create a child thread and open the statebe socket in that thread. We'd then send irregular state updates to that child thread from our main thread and allow the child thread to conflate them into regular outgoing messages.

This is more work than we need here. Let's now prototype the flow of tasks via the local and cloud sockets. This code pulls requests from clients and then distributes them to local workers and cloud peers on a random basis.

Before we jump into the code, which is getting a little complex, let's sketch the core routing logic and break it down into a simple yet robust design. We need two queues, one for requests from local clients and one for requests from cloud clients. One option would be to pull messages off the local and cloud frontends, and pump these onto their respective queues. But this is kind of pointless because ZeroMQ sockets are queues already.

So let's use the ZeroMQ socket buffers as queues. This was the technique we used in the load balancing broker, and it worked nicely. We only read from the two frontends when there is somewhere to send the requests. We can always read from the backends, as they give us replies to route back.

As long as the backends aren't talking to us, there's no point in even looking at the frontends. Randomly sending tasks to a peer broker rather than a worker simulates work distribution across the cluster. It's dumb, but that is fine for this stage. We use broker identities to route messages between brokers. Each broker has a name that we provide on the command line in this simple prototype.

As long as these names don't overlap with the ZeroMQ-generated UUIDs used for client nodes, we can figure out whether to route a reply back to a client or to a broker. Here is how this works in code. The interesting part starts around the comment "Interesting part". You can satisfy yourself that the code works by watching it run forever. If there were any misrouted messages, clients would end up blocking, and the brokers would stop printing trace information.

You can prove that by killing either of the brokers. The other broker tries to send requests to the cloud, and one-by-one its clients block, waiting for an answer. Let's put this together into a single package. As before, we'll run an entire cluster as one process. We're going to take the two previous examples and merge them into one properly working design that lets you simulate any number of clusters.

This code is the size of both previous prototypes together. That's pretty good for a simulation of a cluster that includes clients and workers and cloud workload distribution. Here is the code. This simulation does not detect the disappearance of a cloud peer.

If you start several peers and stop one, and it was broadcasting capacity to the others, they will continue to send it work even if it's gone. You can try this, and you will get clients that complain of lost requests.

The solution is twofold: first, make capacity information expire, so that a peer that falls silent is quickly dropped from the routing tables; second, add reliability to the request-reply chain. We'll look at reliability in the next chapter.

Reliable Request-Reply Patterns

This chapter looks at the general question of reliability and builds a set of reliable messaging patterns on top of ZeroMQ's core request-reply pattern. In this chapter, we focus heavily on user-space request-reply patterns, reusable models that help you design your own ZeroMQ architectures: the Lazy Pirate, Simple Pirate, Paranoid Pirate, Majordomo, Titanic, Binary Star, and Freelance patterns. Most people who speak of "reliability" don't really know what they mean.

We can only define reliability in terms of failure. That is, if we can handle a certain set of well-defined and understood failures, then we are reliable with respect to those failures. No more, no less. So let's look at the possible causes of failure in a distributed ZeroMQ application, in roughly descending order of probability: application code (the worst offender), system code such as brokers, message queues running out of memory, networks failing, hardware failing, networks failing in exotic ways, and entire data centers being lost to lightning, earthquakes, fire, or more mundane power or cooling failures. To make a software system fully reliable against all of these possible failures is an enormously difficult and expensive job and goes beyond the scope of this book.

Because the first five cases in the above list cover the vast majority of real world requirements, that's what we'll address. If you're a large company with money to spend on the last two cases, contact my company immediately! There's a large hole behind my beach house waiting to be converted into an executive swimming pool. So to make things brutally simple, reliability is "keeping things working properly when code freezes or crashes", a situation we'll shorten to "dies".

However, the things we want to keep working properly are more complex than just messages. We need to take each core ZeroMQ messaging pattern and see how to make it work (if we can) even when code dies.

In this chapter we'll focus just on request-reply, which is the low-hanging fruit of reliable messaging. The basic request-reply pattern, a REQ client socket doing a blocking send and receive to a REP server socket, scores low on handling the most common types of failure. If the server crashes while processing the request, the client just hangs forever. If the network loses the request or the reply, the client hangs forever. Request-reply is still much better than TCP, thanks to ZeroMQ's ability to reconnect peers silently, to load balance messages, and so on. But it's still not good enough for real work. The only case where you can really trust the basic request-reply pattern is between two threads in the same process where there's no network or separate server process to die.

However, with a little extra work, this humble pattern becomes a good basis for real work across a distributed network, and we get a set of reliable request-reply (RRR) patterns that I like to call the Pirate patterns (you'll eventually get the joke, I hope). There are, in my experience, roughly three ways to connect clients to servers.

Each needs a specific approach to reliability: multiple clients talking directly to a single server; multiple clients talking to a broker proxy that distributes work among multiple workers; and multiple clients talking to multiple servers with no intermediary proxies at all. Each of these approaches has its trade-offs and often you'll mix them. We'll look at all three in detail. We can get very simple reliable request-reply with some changes to the client. We call this the Lazy Pirate pattern. Rather than doing a blocking receive, we poll the REQ socket and receive from it only when we're sure a reply has arrived; we resend the request if no reply arrives within a timeout period; and we abandon the transaction if there is still no reply after several retries. A sketch of this client logic follows.
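A minimal sketch of the Lazy Pirate client logic with the plain libzmq API; the endpoint and the timeout and retry constants are illustrative, and client is a connected REQ socket:

    #include <zmq.h>

    #define REQUEST_TIMEOUT 2500    //  msec
    #define REQUEST_RETRIES 3       //  Before we abandon

    int retries_left = REQUEST_RETRIES;
    while (retries_left) {
        zmq_send (client, "request", 7, 0);
        zmq_pollitem_t items [] = { { client, 0, ZMQ_POLLIN, 0 } };
        zmq_poll (items, 1, REQUEST_TIMEOUT);
        if (items [0].revents & ZMQ_POLLIN) {
            char reply [256];
            zmq_recv (client, reply, sizeof reply, 0);
            break;                  //  We got a reply: success
        }
        //  No reply within the timeout: REQ enforces a strict
        //  send/receive cycle, so close and reopen before resending
        zmq_close (client);
        client = zmq_socket (ctx, ZMQ_REQ);
        zmq_connect (client, "tcp://localhost:5555");
        retries_left--;
    }
    //  If retries_left is zero here, we abandoned the transaction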

A REQ socket imposes a strict send/receive cycle, which is slightly annoying when we want to use REQ in a pirate pattern, because we may send several requests before getting a reply. The reasonably good brute force solution is to close and reopen the REQ socket after an error, as the sketch above does. To run this test case, start the client and the server in two console windows.

The server will randomly misbehave after a few messages, and you can check the client's response. The client sequences each message and checks that replies come back exactly in order. Run the test a few times until you're convinced that this mechanism actually works.

BCAM

When we specify the elliptical calculation of spot position, the BCAM draws an ellipse around the spot that marks the border of the one it fitted to the spot.

When we ask the BCAM to fit a vertical line to a spot, we draw the vertical line over the spot, but clip the line to the boundaries of the spot. A good choice of intensity threshold is essential to accurate measurement of spot position. The BCAM allows us to specify the intensity threshold, t, in several ways, selected by symbols in the threshold string. These symbols have the same meaning in the Dosimeter and WPS instruments.

The string "10 " applied to an image with average intensity 50 and maximum intensity gives a threshold of The " " instruction is more reliable when your image contains small spots. The average intensity is close to the background intensity. Dark points in the background caused by unusual noise or defects in the image sensor do not disturb the average intensity, so the threshold will be well-placed relative to the spot intensity and the background.

When a spot takes up half the width of the image, the average intensity is raised significantly above the background intensity by the intensity of the large spot.

In this case, we use the minimum intensity as our estimate of background intensity. Following p and the threshold symbol, we have an integer, n , which specifies a number of pixels, and a comparison symbol. If there is no symbol, we assume n is the minimum number of pixels in a spot.

Following n and the size symbol comes a real number, e, which specifies the maximum eccentricity a spot may have to qualify for measurement.

The eccentricity must be a value greater than one. We calculate the eccentricity first by dividing the longer side of the boundary rectangle by the shorter side. This gives us the eccentricity for vertical and horizontal ellipses. To account for ellipses at other angles, we divide the number of pixels in the spot by the number we expect from an ellipse filling the rectangle, and obtain another eccentricity value.

We multiply the two eccentricities together to obtain the final estimate of eccentricity. This calculation is fast and produces a measure of eccentricity that works well enough for our existing applications.
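A sketch that follows the description above literally; the names are ours, not the BCAM source code's. Here w and h are the sides of the spot's boundary rectangle in pixels, and num_pixels is the number of pixels above threshold in the spot:

    #include <math.h>

    double
    spot_eccentricity (double w, double h, double num_pixels)
    {
        //  Longer side of the boundary rectangle over the shorter side
        double ecc1 = (w > h)? w / h: h / w;
        //  Pixels in the spot relative to an ellipse filling the rectangle
        double ecc2 = num_pixels / (M_PI * w * h / 4.0);
        //  The two estimates multiplied together give the final value
        return ecc1 * ecc2;
    }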

In this example, a spot must contain at least a minimum number of pixels, will be ignored if its eccentricity is greater than 2, and may contain at most 2 pixels. This string is good for finding damaged pixels in images that also contain much larger and brighter optical features. The example results string in the screen shot is the one we get with the default BCAM parameter values.

These bounds are just large enough to enclose all the pixels of a spot, or slightly larger if the spot is only one or two pixels wide.

The brightness of the spot is not available in the standard BCAM result string. Instead, the standard result contains the number of pixels in the spot and the maximum intensity in the spot. This one gives the position of each of two large elliptical spots using the ellipse-finder. In addition to position, the line gives the number of pixels above threshold in the spot, the maximum intensity (gross intensity, not net intensity), the sensitivity to threshold, and the threshold itself, for both spots.

Here we see the left, top, right, and bottom edges of each of two rectangles. Each edge is given as an image column or row, as required by the orientation of the edge.

Here we see the total net intensity of each spot. The BCAM is accurate only so long as the laser image is bright enough to be undisturbed by the prevailing image noise, and yet not so bright as to saturate the CCD.

You can use the automatic exposure adjustment along with background subtraction. There are rare cases where the flash-adjustment algorithm does not converge. You can use the multiple-spot analysis to find all the spots in your image.

The above line will instruct the BCAM to flash elements 1 through 4, each for its own flash time. The BCAM will subtract a background image from the image of the laser or lasers. The BCAM takes one image with the laser flashing, and another with the laser turned off.

Both images have the same exposure to ambient light. We call the first image the foreground image, and the second the background image. The BCAM subtracts the background from the foreground to obtain the background-subtracted image. Negative intensities in the background-subtracted image are set to zero.

If a pixel in the background image is brighter than a pixel in the foreground image, the pixel in the background-subtracted image will be zero. We use background subtraction when we have ambient light that varies across the image sensor, or when we have significant dark current in the image caused by long exposures or by radiation damage. Provided that neither the foreground nor background image saturates, and provided that the ambient light or dark current does not change significantly between the two images, the background-subtraction will contain only the laser images.
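As a sketch of the arithmetic, over 8-bit intensity arrays (the function is illustrative, not the LWDAQ implementation):

    void
    subtract_background (const unsigned char *fg, const unsigned char *bg,
                         unsigned char *out, int num_pixels)
    {
        for (int i = 0; i < num_pixels; i++)    //  Clip negatives to zero
            out [i] = (fg [i] > bg [i])? fg [i] - bg [i]: 0;
    }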

If the parameter string is "0. We assume that this image will contain saturated pixels, so that its maximum intensity is the saturation intensity. Then it obtains another image while flashing non-existent lasers. We assume that the average intensity of this image is the background intensity. These might be 0. It captures one more image, and at this point, you should see all the light sources appearing in the image with roughly the same intensity. Some BCAM users don't like to perform background subtraction, because it slows down data acquisition.

Ambient light reflected from a ball bearing in the field of view might at certain times of day be bright enough to be mistaken for a flashing laser. Even with background subtraction, sunlight passing through a ventilation fan might be bright during the foreground image, but dim in the background image. In such cases, you can use extended acquisition to set the analysis boundaries automatically. The analysis boundaries define the area in the image the analysis program considers when looking for spots.

Before adjusting the boundaries, however, you must obtain a BCAM image in which the lasers you flash are the brightest spots of light. You may have to pick a cloudy day, or work at night. You may have to cover up shiny ball bearings in the field of view. If this border is set to 0, then the analysis boundary adjustment is disabled. By default, its value is zero.

Try setting it to a non-zero value and perform the extended acquisition. You should see a new blue rectangle around your spots. Subsequent acquisitions from the BCAM will impose these boundaries upon the image as it comes in. If you are using the BCAM with the Acquisifier, you can go through all your BCAMs with one run through your Acquisifier script, and get them to set their flash times and analysis boundaries.

But you must be sure to include in your Acquisifier script a mention of all four of the analysis boundaries, and the flash time. If you don't mention these parameters, they will not be remembered by the script, and so they will not be put into place again the next time you capture from the same BCAM.

If the two addresses are the same, then the BCAM will recognize that they are the same, and share a single socket between the sources and sensor. But when the sensor and source are connected to separate IP addresses, the LWDAQ program must receive confirmation from the sensor socket that the sensor is ready for exposure before it sends the instructions to flash the laser to the source socket.

It must receive confirmation from the source socket that the flash is complete before it sends instructions to retrieve the image to the sensor socket. They will be a matter of milliseconds in a local area network, but could be hundreds of milliseconds across the global Internet.

Camera

The Camera Instrument captures images from an image sensor device, displays the image on the screen with your choice of intensification, and prints the image's dimensions, analysis boundaries, and intensity characteristics in the instrument panel. The Camera Instrument is defined by Camera. The Camera Instrument provides buttons for each of the image sensors it supports.

Press one of these buttons and the Camera will be configured instantly for the sensor named on the button. See above for a list of image sensors supported by the Camera, and the values the configuration buttons will assign to various Camera parameters. It takes pictures using ambient light only. There is no source device specified in the data acquisition parameters. To accommodate large differences in ambient light intensity, the Camera supports the anti-blooming and fast-move features of some TC-series devices.

Neither of these features is used or required by the ICX-series image sensors. By default, anti-blooming and fast-move are enabled for TC-series sensors. When we use anti-blooming with a device that does not provide anti-blooming, the anti-blooming has no effect. When we use fast-move with a device that does not support fast-move, the result is a streaky, white image.

No other device supports fast-move. Several of the A-series camera heads support anti-blooming. The Camera allows you to read images from the daq, memory, or file, just like any other instrument. The Camera allows you to manipulate these images before you display and analyze them.

Try writing "grad" as a manipulation and see what happens. You can specify multiple manipulations by listing their codes separated by spaces. The Camera will perform the manipulations consecutively. The original image will be replaced by the final product. The subtract and combine manipulations, for example, are not supported by the Camera. The Camera result contains ten numbers. The first four numbers are always integers, and they give the left, top, right, and bottom edges of the analysis boundaries.

The next four numbers are real-valued. They are the average, standard deviation, maximum, and minimum intensity of the image within the analysis boundaries. The last two numbers are always integers. They give the height and width of the image. The Diagnostic instrument provides an oscilloscope-like display of the power supply voltages on the driver board, and calculates the average current running out through each of them.

The result string provides the power supply voltages and currents, the most recent device loop time, and other diagnostic parameters. The instrument is defined by Diagnostic. The instrument provides extra buttons that allow you to manipulate and test devices directly. When it acquires, the Diagnostic instrument reads the hardware, firmware, and software version numbers from the driver.

We change the voltage scale, offset, or coupling for the display by entering new values in the entries dedicated to each of these variables. When we press return while in one of these entries, the display will refresh with the existing data. We change the seconds per division and the offset of the left edge from time zero (the time of the first sample) in the same way.

We can change the number of seconds per division for the display as well. The Diagnostic result string contains fourteen numbers. The common and differential gain are properties of the differential amplifier that the LWDAQ Driver uses to measure its power supply voltages and currents. The most recent loop time will be valid if a loop job has been executed since the driver has been reset. The data transfer speed applies to the TCPIP connection between the driver and the data acquisition computer.

Some drivers allow you to turn the data acquisition power supplies on and off. If you turn the head power off on our demonstration stand, everyone else will get no data from it until they realize what you have done, and turn the power back on again. If you want to turn the power off and then on again, wait for several seconds after turning the power off, to let the circuits settle in preparation for power-on reset.

The Reset button resets the driver state machines, but not the devices. Some drivers turn off their data acquisition power supplies when you reset them, and others turn on their data acquisition power supplies.

The behavior of your driver will depend upon its firmware version number. The Sleep button sends to sleep the target device you specify with the config array entries. The Wake button wakes it up again. Because waking and sleeping are done automatically by all data acquisition instruments, you don't do any harm by sending a device to sleep. But you might bring down the demonstration stand power supplies if you wake up too many devices and forget to put them to sleep again. One way to put all devices attached to a driver to sleep is to use the Sleep All button.

Another way to send all devices to sleep is to press the Head Power Off button, count to ten, and press the Head Power On button. The loop time for a working cable is 5. The Transmit button transmits the commands you specify in hexadecimal to the target device. Enter a command, or a list of commands separated by spaces, in the command entry box next to the Transmit button, and press transmit. If you want the same single command transmitted multiple times in succession, enter a non-zero number in the repeat entry box.

If you combine a list of commands with a non-zero repeat value, each command will be transmitted multiple times before the next command is transmitted multiple times. The maximum value accepted by your driver's repeat counter depends upon the driver's firmware version, but you can be confident that recent drivers accept large repeat values.

You turn on the left laser by transmitting the appropriate hexadecimal command word. Press the Transmit button. You should see your laser turn on and stay on. To turn it off and send the BCAM to sleep, transmit the sleep command. You may also specify delays in milliseconds by adding an integer to this string. You separate the code words with one or more spaces. The string "reset off 1000 on 100" resets the controller, turns off the head power supplies, waits one second, turns on the power supplies, and waits for a hundred milliseconds.

The delays allow you to make sure that the power supplies have a chance to turn off fully, and to turn on again. In large LWDAQ systems containing devices with unreliable power-up reset, power supply cycling is essential to guarantee that all devices are in their sleep state and the power supplies are stable.

To help us deal with these problems, the Diagnostic instrument can turn off and on the power supplies automatically, check the power supply voltages, and repeat the same cycle until it sees the power supplies turning on properly.

The letters "psc" stand for "power supply check". If the power supplies lie within the ranges specified by the info array, acquisition ends. But if the supplies are out of range, the instrument acquires again. The power supply ranges are defined by elements in the info array.

Note that we use "max" to mean "most positive acceptable value" and "min" to mean "least positive acceptable value".

The string "off on " cycles the power supplies. Dosimeter The Dosimeter uses an image sensor to detect ionizing radiation and to measure image sensor dark current. By detecting ionizing radiation, the Dosimeter uses an image sensor to measure ionizing dose rate. By measuring dark current, the Dosimeter uses an image sensor to measure accumulated neutron damage. With the help of these controls, we can use the Dosimeter to take x-ray images with a pulsed x-ray source.

We introduced the Dosimeter in Hit Counting. Use the buttons in the Info panel to configure the dosimeter for a particular sensor.

The figure below shows a Dosimeter image taken with a TC-series sensor. All image sensors suffer cumulative damage from fast neutrons. This damage increases their dark current. Thus the image sensor dark current can be used as a measure of accumulated neutron dose.

The dark current is also a strong function of temperature. But a measurement of the dark current, combined with a measurement of the ambient temperature, provides us with an estimate of the cumulative fast neutron dose. Meanwhile, the transfer array of the image sensor acts as a general-purpose radiation counter. In the TC-series sensors, the transfer array is an aluminum-masked array of pixels adjacent to the image array. The ICX transfer array is beneath the image array, also masked from light.

The thin layer of silicon in the transfer array records electron-hole pairs generated by neutrons, photons, and charged particles. The Dosimeter's result string consists of four or more numbers. It is up to the user to convert counts per pixel into electrons per pixel or energy per pixel, using a knowledge of the image sensor. The Dosimeter calculates average charge density by first calculating the sum of intensity, then dividing by the number of pixels in the analysis boundaries.

To obtain the sum intensity, we compare the intensity of each pixel to a threshold. If the pixel intensity is less than the threshold, it does not contribute to the sum intensity. If the intensity is greater than the threshold, it contributes to the sum intensity the amount by which it exceeds the threshold.
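A sketch of this calculation (illustrative code, not the Dosimeter's own):

    double
    charge_density (const unsigned char *pixels, int num_pixels, int threshold)
    {
        double sum = 0.0;
        for (int i = 0; i < num_pixels; i++)
            if (pixels [i] > threshold)
                sum += pixels [i] - threshold;  //  Only the excess counts
        return sum / num_pixels;                //  Average per pixel
    }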

This is the slope in intensity from top to bottom of the image. It is up to the user to convert counts per row into electrons per second per pixel, using a knowledge of the image sensor and the readout speed.

Following the charge density and dark current we have the average image intensity and the threshold used to compute the charge density. After this we may have one or more hits listed.

A hit is a spot of light in the image. Each spot is specified by its sum intensity, which is the sum of its intensity above the threshold for charge detection. The Dosimeter clears charge out of the detection area and then activates or flashes a source of radiation. We specify the source with a driver socket, element number, multiplexer socket, and device type. The LWDAQ driver sends the activation command to the radiation source, waits for the specified length of time, and sends the de-activation command.

We assume that the activated source will de-activate itself some time later, as specified by the value of the command word. The Dosimeter treats it as a hexadecimal representation of this sixteen-bit command word, and transmits it to the source device. Thus the Dosimeter can turn on a radiation source with a single command. We intend for the Dosimeter to either flash or activate a radiation source, but not both.

If both flashing and activation are specified, the Dosimeter prints a warning message in its text window. An example of a radiation source that activates with a single command is the X-Ray Controller AX. The AX uses the top eight bits of the command to determine its activation period, and deactivates itself. When hits are common, or fill the screen, the hit-counting and hit-detection are not useful. But let us suppose that the hits are rare, only a few per image. The Dosimeter provides background subtraction, so as to better distinguish between new radiation hits and permanent bright spots in the image.

Such bright spots can be caused by fast neutrons. When we apply background subtraction, we remove the dark current gradient and therefore make it impossible to deduce the dark current from the final image. The Dosimeter calculates the dark current before it subtracts the background, and stores the result in the image's result string.

Flowmeter

Before you try the Flowmeter, we recommend you read through the help entry on the Thermometer.

The Flowmeter Instrument is defined by Flowmeter. The flowmeter head has a heater circuit that runs 15 mA through a platinum RTD to heat it up, and then measures the time constant of its cooling to ambient temperature. Select a sensor for flow measurement in the same way you select a sensor in the Thermometer instrument.

The flowmeter measurement takes about ten seconds with the default instrument settings. In the first second, it measures the ambient temperature of the gas. In the next two seconds, it heats up the sensor. For the seven seconds after that, it records the temperature of the sensor.

When all the data has been transferred to the LWDAQ Driver, the Flowmeter displays it on the screen, and uses the last six seconds of the cool-down to calculate the time constant. The Flowmeter ignores the first second of cool-down after the end of the heating, because secondary time-constants manifest themselves during this first second, and degrade the accuracy of our measurement.

After a Flowmeter acquisition, you need to wait a few more time constants before you acquire again, so that the sensor can cool all the way back to the ambient temperature. You will see in the display two red graphs. One is a linear plot of the temperature. The other is a log-linear plot of temperature, and this plot extends only across the final six divisions of the display.

In green is the straight-line fit to the log-linear graph, which we transfer also into the linear plot so you can compare the actual temperature with the fitted-temperature curve. We obtain good precision in the time constant measurement. The curve of inverse time constant versus flow rate must be measured, because it is not exactly linear. We have an electronically-driven proportional valve connected to our apparatus, and we can calibrate a sensor in five minutes automatically.

Acquire with the Flowmeter, open the Fan Switch tool, turn on the fan, and wait twenty seconds. You should see the cool-down is now more rapid. If not, then turn the fan off, wait, and try again. It could be that someone else left the fan on.

Gauge

The Gauge measures physical quantities such as temperature, resistance, or strain using a two-point calibration of a linear measuring device such as the Resistive Sensor Head. The Gauge Instrument is almost identical to the Thermometer, except it does not assume that the unit of the quantity you are measuring is Centigrade.

The Gauge Instrument is defined by Gauge. The Gauge measures the bottom and top reference voltages returned from a compatible measuring device. When you acquire data from channels on the sensor head, the Gauge will return the resistance across each channel you acquire from.

The Gauge measures the resistance by drawing a straight line through the voltage it reads from the bottom and top reference resistors, and using this straight line to determine the resistance corresponding to the voltage it reads back from other channels.
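A sketch of the two-point method just described; the names are ours, not the LWDAQ source code's. With the bottom and top reference resistances known, the measured reference voltages define a straight line, and the channel voltage is interpolated along it:

    double
    gauge_resistance (double v_channel,
                      double v_bottom, double r_bottom,
                      double v_top, double r_top)
    {
        double slope = (r_top - r_bottom) / (v_top - v_bottom);
        return r_bottom + (v_channel - v_bottom) * slope;
    }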

Once you enter the reference temperatures for your RTDs, the Gauge will convert resistance into temperature for you in its results line. The Gauge will likewise convert a strain gauge's resistance into ppm strain in its results line.

You can, of course, connect both RTDs and strain gauges to the same sensor head, and you can read them all out in one acquisition. The Gauge will not convert some channels into temperature and others into strain. It does either temperature or strain or resistance or some other units.

We leave it to you to decide if you want to convert resistance to temperature and strain yourself, or if you want to configure and acquire from the Gauge Instrument once for your temperature sensors and a second time for your strain gauges.

The Gauge's measurement method is accurate so long as the voltage returned by the measuring device is linear with the physical quantity it measures. All versions of the measuring head are designed to be linear to ppm levels of their dynamic range.

Inclinometer

The Inclinometer, also called a tilt sensor, measures inclination in two directions using a liquid level sensor with five electrodes. The Inclinometer Instrument is defined by Inclinometer. One circuit board holds the liquid level sensor and connects to the readout electronics via a six-way flex cable. The sensor has five electrodes. Four lie upon the corners of a square.

The fifth lies in the center of the square. We call that one CTR in the circuit diagram. In the Operation section of the Inclinometer Head Manual we describe how we measure the tilt of the sensor in the X and Y directions.

There you will also find a description of the elements returned in the Inclinometer Instrument result. A Rasnik Instrument takes an image of a chessboard pattern and determines the point in the chessboard projected onto a reference point in the image sensor.

Our Rasnik Instrument is defined by Rasnik. The NIKHEF laboratory in the Netherlands invented the Rasnik Mask, which is a chessboard with some squares switched from black to white, and others switched from white to black, in such a way as to indicate to a camera which part of the mask it is looking at, even though the camera sees only a small portion of the mask.

We can configure the Rasnik Instrument for a variety of image sensors using buttons in its Info panel. By default, the Rasnik Instrument is configured for a TC-series sensor. We distinguish between the magnification of the mask in the x (horizontal) and y (vertical) directions.

You can see a Rasnik result in the figure above. If we specify an image in the LWDAQ image list, then the name of this image is the name at the beginning of the rasnik result.

The first and second numbers are the coordinates in the rasnik mask of the point in the mask that is projected by the rasnik lens onto the reference point in the CCD. The reference point does not strictly speaking have to lie within the CCD. It is a point in "image coordinates". Unlike the mask coordinates, which are right-handed, and proceed from the bottom-left corner of the mask, the image coordinates are left-handed, and proceed from the top-left corner of the top-left pixel in the image.

Positive x is left to right, and positive y is top to bottom. We can specify image coordinates in microns, or in pixels. The third and fourth numbers are the x and y direction magnifications. The x and y directions we refer to here are not those of the image coordinates, but rather of what we call "pattern coordinates".

The pattern coordinates are parallel to the mask squares. The x-direction is roughly left-to-right along the squares, and the y-direction is roughly top-to-bottom.

The fifth number is the rotation of the pattern coordinates with respect to the image coordinates, and so we can refer to it as the mask rotation. Positive rotation is anti-clockwise rotation of the image with respect to the CCD, or clockwise rotation of the mask when we look from behind the mask towards the CCD through the lens.

The sixth number is an estimate of the accuracy of the mask x and y measurement. We have improved this error estimate so that it now includes the error we get from using a reference point that is displaced from the center of the area in the image we use to determine the rasnik measurement. When we use the top-left corner of the image as our reference point (coordinates 0, 0), our measurement is less accurate than if we use the center of the analysis bounds.

This is because there is a stochastic error in our measurement of the rotation of the mask, and we must multiply this error by the distance from the center of the analysis bounds to the reference point to obtain the additional error caused by this displacement.
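As a Python sketch of this error term: the extra error is the rotation error multiplied by the displacement of the reference point from the center of the analysis bounds. Combining it with the base error in quadrature is our assumption; the text does not say how the two terms are combined.

    import math

    def error_with_displacement(base_error_um, rotation_error_rad,
                                reference_um, bounds_center_um):
        dx = reference_um[0] - bounds_center_um[0]
        dy = reference_um[1] - bounds_center_um[1]
        extra_um = rotation_error_rad * math.hypot(dx, dy)
        return math.hypot(base_error_um, extra_um)   # quadrature: our assumption

    # Example: a 0.1-mrad rotation error and a reference point 1.5 mm
    # from the bounds center add 0.15 um of error.
    print(error_with_displacement(0.1, 1e-4, (0.0, 0.0), (1200.0, 900.0)))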

That is not to say, however, that we lose anything by using the top-left corner: the error estimate now accounts for the displacement. Note that we assume the pixels are square, which has been the case for all the image sensors we have used in rasnik instruments. The ninth number is the orientation code.

The names of the numerical orientation codes appear in rasnik. The x, y, and z axes referred to in those names are axes in the mask coordinates, with z being out of the mask. The mask has a nominal orientation, in which the x-code increases from left to right as seen from the front of the mask, and the y-code increases from bottom to top; this is orientation 1. If we rotate the mask about its y-axis, the x-code will decrease from left to right. The y-code, however, remains unaffected. The mask is in orientation 2.

If we instead rotate the mask about its x-axis, the y-code will decrease from bottom to top. The x-code will be unaffected. The mask is in orientation 3. If we rotate the mask about its z-axis, the x-code will decrease from left to right and the y-code will decrease from bottom to top. The mask is in orientation 4. We do have one last orientation code. This orientation code never appears in the rasnik output, but you can specify it as an input to the rasnik analysis, in which case the analysis will try all orientations and pick the one it thinks is best.

The tenth and eleventh numbers are the reference point the rasnik analysis used, specified in microns from the top-left corner of the top-left pixel. The default value of the reference code is zero. We prefer to use reference code 2, to select the center of the image, where errors in our measurement of the rotation of the mask pattern have the least effect upon our calculation of the point in the mask that is projected onto the reference point. Nevertheless, we often use reference code 0, which has the advantage of being a point in the image whose location is independent of pixel size and sensor width.
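Here is a minimal Python sketch of the two reference codes named above; the function name is ours, and codes other than 0 and 2 are not reproduced here.

    def reference_point_um(code, image_width_um, image_height_um):
        if code == 0:
            # Top-left corner of the top-left pixel: independent of
            # pixel size and sensor width.
            return (0.0, 0.0)
        if code == 2:
            # Center of the image, where rotation errors have the
            # least effect on the mask measurement.
            return (image_width_um / 2.0, image_height_um / 2.0)
        raise ValueError("only codes 0 and 2 are covered in this sketch")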

To specify an arbitrary reference point, we use a coordinate system in units of microns with its origin at the top-left corner of the top-left pixel in the image. By "image" we mean the array of pixels we display in the Rasnik Instrument, and which we can store to disk. The image contains all pixels available from the image sensor, as well as extra columns on the left and one or more extra rows on the top. The top-left pixel in the image does not exist in the image sensor at all, but is produced by our data acquisition system while we get ready to read out the first row of the sensor.

The x-axis runs from left to right across the image and the y-axis runs from top to bottom. A TC image has a fixed number of columns and rows. The analysis boundaries are not included in the Rasnik output. The twelfth and thirteenth numbers in the Rasnik Instrument output string are the skew of the image in the x and y directions.

If the mask magnification increases from left to right, the horizontal lines in the pattern will diverge. We express this divergence as the rate at which the slope of a horizontal line changes, in radians per meter, from left to right, and we call this the x-direction skew. The y-direction skew is the rate at which the slope of the vertical lines changes from top to bottom. The final parameter is the image slant in milliradians.
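To make the skew definition concrete, here is a one-line model in Python: under a constant skew, the slope of a horizontal pattern line changes linearly with horizontal position. The numbers in the example are illustrations of ours.

    def horizontal_slope(x_m, slope_at_origin_rad, skew_rad_per_m):
        # Slope of a horizontal pattern line at horizontal position x,
        # in meters, under the constant-skew model described above.
        return slope_at_origin_rad + skew_rad_per_m * x_m

    # Example: with an x-direction skew of 0.5 rad/m, the slope of a
    # horizontal line changes by 1 mrad over 2 mm of image.
    print(horizontal_slope(0.002, 0.0, 0.5))   # 0.001 rad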

The slant is the amount by which the mask's vertical and horizontal edges depart from perpendicular at the center of the analysis boundaries. We look at the lower-right quadrant created by the intersection of a vertical edge and a horizontal edge near the center of the analysis bounds. If we specify our own reference point, the result will give the Rasnik measurement with respect to the new reference coordinates in the CCD. The Rasnik extended acquisition adjusts the exposure time in the same way as the BCAM's extended acquisition.

But the Rasnik's extended acquisition does not adjust its analysis boundaries. As you select different reference codes, we mark the reference point in the Rasnik Panel in different ways. If you perform live capture with the -1 option, you can get a bit stuck trying to get to the Stop button after the OK button. Pressing Stop stops the Rasnik drawing its colored lines over the image. When you enable the Rasnik Instrument's boundary adjustment process, it selects a smaller analysis rectangle at random within the original rectangle.

If this alternate rectangle produces a valid rasnik result, the analysis terminates and returns this result. Otherwise, the analysis routine writes one or two lines of red error messages to the instrument panel and then tries another rectangle. If it is successful, the analysis draws the rasnik measurement on the image. At the end of the bounds-adjusting analysis, the analysis boundaries in the analyzed image will be set equal to the rectangle within which the analysis obtained a valid result.
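The process amounts to a retry loop. Here is a Python sketch, where analyze stands in for the rasnik analysis routine and the attempt limit is an assumption of ours:

    import random

    def analyze_with_bounds_adjustment(image, analyze, bounds, attempts=20):
        # Try the original analysis bounds first.
        result = analyze(image, bounds)
        if result is not None:
            return result, bounds
        left, top, right, bottom = bounds
        for _ in range(attempts):
            # Select a smaller rectangle at random within the original.
            sub = (random.uniform(left, (left + right) / 2),
                   random.uniform(top, (top + bottom) / 2),
                   random.uniform((left + right) / 2, right),
                   random.uniform((top + bottom) / 2, bottom))
            result = analyze(image, sub)
            if result is not None:
                return result, sub      # bounds end up set to this rectangle
        return None, bounds             # original boundaries left intact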

If it does not obtain a valid result, it leaves the original boundaries of the image intact. This display is entertaining when you are working directly on the machine performing the analysis, but if you are connected via X-Windows, you will find the rectangle-drawing slows the Rasnik Instrument down. The Rasnik Instrument will perform various manipulations upon the acquired image before analysis begins.

For example, we can smooth the image once and then shrink it by a factor of two for analysis. Even though the image we analyze has been shrunk, the results of analysis will apply to the original image. The Rasnik Instrument scales the pixel dimensions by two, and even adjusts the display zoom so that intermediate images are displayed at the same size as the original. Shrinking the image by a factor of two reduces the computation time.
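Here is a Python sketch of the shrink-by-two step, assuming a grayscale image stored as a list of equal-length rows; averaging 2x2 blocks is our illustration of the idea, not the instrument's own routine.

    def shrink_by_two(pixels):
        # Average each 2x2 block of pixels into one output pixel.
        rows, cols = len(pixels), len(pixels[0])
        return [[(pixels[r][c] + pixels[r][c + 1]
                  + pixels[r + 1][c] + pixels[r + 1][c + 1]) / 4.0
                 for c in range(0, cols - 1, 2)]
                for r in range(0, rows - 1, 2)]

To refer a position found in the shrunken image back to the original, multiply it by two; this is the scaling of pixel dimensions mentioned above.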

We can also shrink by a factor of three or even four. The rasnik analysis can also find patterns that contain no code squares, such as vertical bars or an image of wire mesh. For such patterns, the result string contains seven numbers. The first two are the image coordinates of the top-left corner of one of the squares in the pattern.

The coordinates are in units of pixels, not microns. Position 0, 0 is the top-left corner of the top-left pixel of the image.

The third and fourth numbers are the width of the squares in the near-horizontal direction and their width in the near-vertical direction, again in pixels. The fifth number is the rotation of the pattern, counter-clockwise, in milliradians. The sixth number is an estimate of the fitting accuracy in pixels.

The seventh number, the extent, is the number of squares from the image center over which the pattern extends. A sketch of unpacking this seven-number result appears below.
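Here is a minimal Python sketch of unpacking such a result; the field names are ours, and the order follows the description above.

    def parse_pattern_result(result):
        fields = [float(n) for n in result.split()]
        x, y, width_x, width_y, rotation, error, extent = fields
        return {"corner_x_px": x, "corner_y_px": y,
                "width_x_px": width_x, "width_y_px": width_y,
                "rotation_mrad": rotation,   # counter-clockwise
                "error_px": error, "extent_squares": extent}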

As we describe in Large Rotations, the Rasnik Instrument can analyze rasnik images rotated at any angle with respect to the image rows, provided we know approximately what the angle is in advance. As we rotate the rasnik mask within the image, the rasnik analysis calculates the rotation. When the rotation exceeds one square width divided by the image width, however, the rasnik analysis will fail. To go beyond this limit, the Rasnik Instrument rotates the image clockwise by the nominal rotation before analyzing.
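The failure condition is easy to state in code. In this Python sketch, the square width and image width are illustrative numbers of ours, not properties of any particular sensor:

    def rotation_within_range(rotation_rad, square_width_um, image_width_um):
        # Plain rasnik analysis fails once the rotation exceeds roughly
        # one square width divided by the image width.
        return abs(rotation_rad) < square_width_um / image_width_um

    # Example: 120-um squares on a 3.4-mm-wide image tolerate about
    # 35 mrad of rotation before the nominal rotation must be supplied.
    print(rotation_within_range(0.02, 120.0, 3400.0))   # True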

