Universal Basic Income

Now is the time for a universal basic income in the United States.

Summary

  • A new flat tax on all income, in addition to existing taxes.
  • Distributions are exactly equal for all eligible people.
  • Tax revenue is set aside for immediate redistribution; it never touches the general budget.
  • Distributions are based entirely on the revenue from the tax; there is no commitment to any particular amount.

Details

  • The definition of ‘person’ who is eligible would require a bit of thought. At a minimum, all adult citizens should be included. Whether to include children or various types of non-citizens is debatable. In any case, anyone eligible for distribution must also pay the tax.
  • Tax is set aside for immediate redistribution (say, monthly). It never touches the general budget. It requires no ongoing authorization. It never gets invested in anything or held for any substantial length of time.
  • The calculations on which the payments are made might happen somewhat less frequently than the payments themselves, for example annual calculations but monthly payments. This probably makes things a bit more stable for people in the short-term.
  • Let’s not create a new administrative agency; let’s just use the Social Security Administration. They’ve been doing a fine enough job distributing cash to millions of Americans and have all the infrastructure in place. And this program is similar to SSI anyway.
  • All income must be included in the taxable amount, including payments from this program. No exceptions. In particular, it would include capital gains, dividends and carried interest. ALL income from ALL sources.
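The mechanics described in the details above are simple enough to sketch in code. Here's a minimal Python illustration; all figures (total income, tax rate, number of eligible people) are purely hypothetical, chosen only to show the arithmetic:

```python
# Sketch of the distribution arithmetic: a flat tax on all income is pooled
# and split equally among eligible people. Annual calculation, monthly payouts.
# All inputs below are hypothetical placeholders, not proposals.

def monthly_payment(total_annual_income, flat_tax_rate, eligible_people):
    annual_revenue = total_annual_income * flat_tax_rate
    return annual_revenue / eligible_people / 12

# E.g. $20 trillion of national income, a 10% flat tax, 250 million eligible adults:
payment = monthly_payment(20e12, 0.10, 250e6)
print(f"${payment:,.2f} per person per month")  # $666.67
```

Note there is no guaranteed amount anywhere in the calculation: the payment is whatever the revenue divided by the headcount happens to be.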

Advantages

The system is self-adjusting. Because there is no guaranteed payment amount and distributions are based solely on the amount collected from the tax, as people work less, the payment will drop and people at the margin will be more likely to choose to work, pushing tax revenue back up. As more people work, the tax revenue and corresponding payments go up. There will always be some free riders – but that’s a feature, not a bug. If you have enough money and don’t want to work anymore, then don’t. That’s okay. The important thing is that because this program wouldn’t be means-tested, there is never a disincentive to working more if you want more money. This contrasts with unemployment insurance and all other means-tested government programs, which DO penalize you for working more, by reducing or eliminating your benefit.

Not subject to the whims of politicians. Because the revenue from the tax never enters the general budget or is invested in any way, there is no way for politicians to use it for their own ends, or to tie it up in political battles.

It preserves freedom and dignity. An important part of being a free person is the ability to choose for yourself. It’s something that many of us take for granted, but any program that comes with strings attached, vouchers, subsidies or earmarks is a way for politicians and bureaucrats to control what the beneficiaries do and thereby take away their freedoms, dignity and basic humanity. Distributions from this program will be made equally to all, with no strings. People have the dignity and responsibility to choose for themselves.

Sourdough Bread

Lately I’ve been making some sourdough bread that everybody seems to love. It’s a very simple no-knead recipe that I got from Breadtopia. It took me a few tries to get it right, but now it’s pretty consistent. The ease of this no-knead method does come at a cost: time. You have to let it sit at least overnight, with best results coming after letting the dough sit for 18(!!) hours. You can let it sit as little as 12 hours, but it won’t be as sour.

There are four ingredients:

  • Sourdough starter – 1/4 cup
    • You can get a good one from Amazon, but there are lots of different choices or you can even make your own.
  • White bread flour – 18oz BY WEIGHT
    • White bread flour, not “all-purpose” or anything else. You can also substitute whole wheat flour for some or all of it, but then you’ll have to adjust the amount of water upward a bit.
  • Water – 12 oz BY WEIGHT
    • Purified is best
  • Salt – 1.5 teaspoons
    • Ordinary granulated table salt. Not coarse or kosher salt.
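For anyone who wants to scale this recipe up or down, the weights above pin down the ratios. A small Python sketch using the listed flour and water weights (the `scale` helper is mine, just for illustration):

```python
# Ratios from the ingredient list above: 18 oz flour, 12 oz water, by weight.
flour_oz = 18.0
water_oz = 12.0

# Hydration = water as a percentage of flour weight. 12/18 is about 67%,
# which is a fairly wet dough -- typical for no-knead methods.
hydration = water_oz / flour_oz * 100
print(f"Hydration: {hydration:.0f}%")  # Hydration: 67%

# To scale the recipe, multiply every ingredient by the same factor.
def scale(flour, water, factor):
    return flour * factor, water * factor

print(scale(flour_oz, water_oz, 1.5))  # a half-again larger loaf: (27.0, 18.0)
```

If you do substitute whole wheat flour, nudging the water up raises the hydration percentage; the ratio is the number to watch, not the absolute amounts.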

You also need some equipment:

  • A big bowl. Glass/pyrex works well, but whatever is fine as long as it’s at least 6 quarts or so.
  • A proofing basket.
  • Dutch Oven or La Cloche
  • A scale for measuring the ingredients.
  • Various measuring spoons and cups
  • A work surface, like a cutting board, that you can flour liberally as needed.
  • A bench scraper
  • Dough scraper or silicone spatula. (optional)
  • Wire rack (optional)

I’ll start with the assumption that you already have a good, active sourdough starter. If not, there are lots of instructions out there on how to buy or make your own and make/keep it active. Basically you should have at least a cup or so and when you feed it with more flour and water it should bubble actively and double in size within 12 hours.

Please note that this is a TWO DAY process, so plan ahead.

  1. Combine flour and salt in the big bowl and mix a bit.
  2. Combine sourdough starter and water and mix a bit until mostly dissolved.
  3. Pour liquid into bowl with dry ingredients.
  4. Mix around until mostly incorporated and difficult to stir.
  5. Flour hands liberally and fold dough over itself in the bowl a number of times to fully incorporate all the water and flour together. Scrape down the sides once or twice to try to get everything mixed together.
  6. At this point you should have a ball-shaped blob of dough in the bottom of your bowl.
  7. Cover the bowl with plastic wrap and set aside ON THE COUNTER AT ROOM TEMPERATURE for about 18 hours or so. Do not refrigerate the dough. Yeast action is heavily temperature dependent.
  8. Come back after 18 hours and check your dough. It should have risen considerably – more than doubling.
  9. Flour a work surface liberally – I use a large wooden cutting board.
  10. Turn out the dough and use a scraper or silicone spatula to scrape the dough completely out onto the work surface.
  11. Gently push the dough out until it’s about 10×15 inches. Try not to massage the dough too much as you don’t want to lose the air bubbles.
  12. Fold it over twice the long way, like folding a letter. Then fold it once over itself the other direction so that you have a rough square.
  13. Cover the dough with plastic and let it sit again for 15min while you clean out the bowl and tidy up.
  14. Use your bench scraper to scrape up the dough into your hands and try to form it into a rough ball without messing with it too much, then dump it into your proofing basket.
  15. Let the dough sit in the proofing basket for 90min, but also:
  16. After the dough has been sitting in the proofing basket for 60min, pre-heat your oven to 500 degrees with the dutch oven inside it.
  17. After the dough has sat for 90min, open the oven with the hot dutch oven inside, take the lid off, and sprinkle some cornmeal inside. Then carefully invert the proofing basket over the dutch oven and let the dough drop out, then put the lid back on the dutch oven and close the oven door.
  18. Bake the bread for 30min, then take the lid off and turn the oven down to 450. Bake an additional 15min at 450.
  19. Take the bread out and set it on the wire rack to cool.

Flour mixed with salt.

Dough after 18 hours. Risen and ready to go.

Dough thoroughly mixed. Ready to be covered.

Flour and liquid mixed with spoon.


Profit

Recently I got into a discussion with a friend of mine about health care and how it gets paid for. We were comparing different flavors of private and public insurance and payment mechanisms. At some point during the conversation, he mentioned removing “the profit motive” as being a desirable attribute of some of the publicly paid options. It occurred to me only later that a lot of people don’t really understand what “profit” is, or what function it serves. In fact, many people have it exactly wrong!

Sometimes when I’m considering economic principles, I find it helpful to ignore money for a moment to get better clarity on what is actually happening. Let’s take a simple example of this and see if we can discover the role that profit really plays in society. Suppose that I own a furniture business and set about making a chair. I start with, say, 10 pieces of nice lumber (I don’t really know anything about furniture, so I’m making this up, obviously) and my employee spends 25 hours putting the chair together. I then give the chair to someone. Have I done something good for society? Well, let’s see: Society provided me with 10 pieces of lumber and 25 hours of labor. In return I provided society with a nice chair. Again – was this good? Hard to say, right? We’re talking about three different goods – lumber, labor and chairs – and those things don’t compare with each other very well.

In order to know whether I’ve done any good for anybody, I have to know what value society places on those three resources. What if that lumber came from the very last tree on the island? What if those hours of labor came on Christmas day? This is where money comes back into play. As the business owner, I have to pay whatever society thinks those things are worth to somebody else, before I can use them for myself. Likewise, when I give someone the chair, they have to pay me whatever society (including me) thinks that chair is worth. Now we’re getting somewhere. Let’s suppose that the lumber had a value of $50, the labor $100, and the chair $200. Now we can do a simple, straight comparison. $50(lumber)+$100(labor) = $150 of inputs from society, and the chair provided back $200 of value. So society gained $50 worth of net value from the process. Where did the value go? The value went to society in the form of a nice chair, less some wood and time. The money ended up in my pocket – profit!
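The bookkeeping in the chair example is just subtraction once everything is priced, which is exactly the point. A few lines of Python make the accounting explicit:

```python
# The chair example in numbers: profit as the measure of net value created.
# Prices are what society (the market) valued each thing at, per the example.
inputs = {"lumber": 50, "labor": 100}   # what I had to pay society for the inputs
output_value = 200                       # what society paid me for the chair

profit = output_value - sum(inputs.values())
print(f"Net value added to society: ${profit}")  # $50
```

If the chair had only fetched $120, the same subtraction would report a $30 loss: society gave up more in lumber and labor than it got back in chair, and the shrinking bank account would say so.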

Now we can see that profit is really just a measuring stick. It’s a way of keeping me informed and accountable for being productive. You can certainly imagine a scenario where all of the above takes place entirely without money. This is the dream some have of a sharing, (dare I say communist here?) structure. Everybody is working hard for the common good, and nobody needs money! I could just make that chair as before, not paying for the wood or labor, and not charging for the chair. But in that case, how would I – or anyone else – know whether it was a net positive or not? We couldn’t! Maybe it would have taken 100 hours of labor, or 1000. Who could say that it was or wasn’t worth it?

Businesses that continue to consume more from society than they contribute have shrinking bank accounts and are eventually forced out of business – as they should be. Businesses that consistently add value are profitable and grow and prosper, to society’s long-term benefit.

Some folks still have the vague idea that profit here might be added value, but that the value goes into someone else’s bank account and is somehow “overhead” that might be reduced to our benefit. The opposite is true. Remember that money is not value. You cannot eat money. Money is how we maintain accountability. The value went to society already. You can’t put value in the bank. Society is already better off for the action (making the chair, in our example), and this would be true to the same degree, whether or not money is involved. Profit gives us a way to measure WHO created HOW MUCH value, and then gives them the ability to make additional decisions that we hope will create even more value (by buying more lumber to make more chairs).

Now you can see the potential danger in removing the profit motive: It eliminates measurement and accountability from the equation. In a program without profit, who can say if it’s doing a good job or not?! It’s impossible. In particular, with government programs, it’s usually the value of the output that goes un-measured. Costs are typically set by the market (society) and the government has to pay for them with money just like everybody else. What we can’t know is whether the outputs were really worth it, since the outputs don’t get paid for, or get paid for at non-market prices.

Say we spent 100 doctors treating 1000 patients over the course of a year, and they all ended up healthy. Was that a good thing? Who can say? Were they good doctors? Were the people sick in the first place? Could we have treated the patients with fewer doctors with the same result? Could we have instead treated 2000 other patients with those 100 doctors, to greater total benefit? How much time and effort was spent educating the doctors? Could 200 nurses have achieved the same result? Would that have been a better outcome?

“But wait!” you say, “What about all the profit at companies that hurt society?!” A valid concern, especially in the wake of the recent real-estate crash. Well, I won’t go too much into detail in this post as there’s a lot to say on the subject, but here’s a thought for you: Taking away profit doesn’t help that. It only hides it.

Here’s an idea for you: If you think it’s a good idea for the government to help people out, that’s great. Just don’t take away the ability to measure and hold accountable by taking away the profit. Just give cash (not a voucher – cash) to the people that you want to help. That’s really the best way to help people, but you won’t catch many people in the government going for it. They don’t want to be taken out of the loop or held accountable for success.


Data Center Power

Anybody who deploys equipment in a data center knows that power can be a complex subject. There are lots of different ways to measure it, charge for it, and deliver it. I intend here to shed some light on the various aspects and hopefully give you some ideas of how best to deal with power in your own colocation data center space.

Provisioned Electrical Capacity (PEC) – The most basic way to think about power. This is the amount of power that the physical circuits delivered to you can support without tripping any breakers. You can easily calculate it as the breakered amps of the circuit times the voltage [times 1.73 for 3-phase, see below], times 80% as an industry-standard safeguard. Note: This 80% is somewhat arbitrary; it mostly reflects the fact that breakers aren’t really that exact and you shouldn’t push it. So for a standard 20A@120v circuit, you get 20*120*.8 = 1920W, or about 2kW. This may or may not bear any correlation to how much power you are allowed to draw on those circuits or what you are charged, but it’s a place to start. Be wary here, because often data centers won’t call attention to the fact that they won’t allow you to draw all the power you have provisioned. It’s in the fine print somewhere that you have a power cap lower than what the circuits could handle. You might want this, but it should be clear to you up front. See below.
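The PEC formula is easy to put into code. A small Python sketch (the function name is mine, not an industry term):

```python
# Provisioned Electrical Capacity, per the formula above:
# amps x volts [x 1.73 for 3-phase] x 80% derating.

def pec_watts(amps, volts, three_phase=False):
    capacity = amps * volts
    if three_phase:
        capacity *= 1.73  # sqrt(3), rounded as in the text
    return capacity * 0.8  # the industry-standard 80% safeguard

print(pec_watts(20, 120))        # standard 20A@120v: 1920.0 W, about 2kW
print(pec_watts(30, 120))        # 30A@120v: 2880.0 W
print(pec_watts(20, 208, True))  # 20A/208v/3-phase: about 5757 W
```

Note this is the capacity of the circuit, not what you're allowed to draw; the cap in your contract may well be lower.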

Primary or Redundant Circuits – Primary and redundant circuits are exactly the same from a power point of view. They are both live and able to be drawn from. For the sake of redundancy it is best to get these from different breaker panels or different RPDUs. The key difference that you need to be aware of is that redundant circuits don’t do anything to increase your power cap. Primary circuits do. See below.

Power Cap – The maximum amount of power you are allowed to draw in some defined space. This is usually the sum of all primary circuit capacity, but not always. For example, if you have 2 primary and 2 redundant 20A/120v circuits, your power cap is probably 2 × 1920W = 3840W, or about 3.8kW. In some cases it will make sense to have a power cap lower than what you could draw on all your primary power circuits. This usually occurs when you need circuits of a particular type – with a fairly high breakered amperage – but don’t actually need to draw all that power. You don’t want to pay for all the power, and you don’t need the data center to allocate it to you, so you ask them to deliver the larger circuits, but give you a lower power cap.
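In code terms, the usual cap just sums the primary circuits and ignores the redundant ones. A quick Python sketch using the example figures above (the function name is mine):

```python
# Typical power cap: sum the capacity of primary circuits only.
# Redundant circuits are live, but they don't raise the cap.

def power_cap_watts(primary_circuits):
    """Each circuit is an (amps, volts) pair; 80% derating as usual."""
    return sum(amps * volts * 0.8 for amps, volts in primary_circuits)

primaries = [(20, 120), (20, 120)]   # the 2 redundant circuits are excluded
print(power_cap_watts(primaries))    # 3840.0 W, about 3.8kW
```

A negotiated lower cap would simply replace this computed figure with whatever number is in the contract.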

Circuit Types – Power circuits come in many different flavors, but in the US most fall into some combination of three parameters: 120 or 208 volts, 20 or 30 amps, and single or three phase. Which of those you choose depends on a number of factors:

  • What type of power does your equipment require?
  • What type of PDUs do you want to use?
  • How much power do you actually expect to draw?
  • Can you use vertical PDUs?
  • Are you in open racks or closed cabinets?

Plug Types – Wikipedia has excellent information as always, but I’ll give you the short version. NEMA is the US standard, and it labels plug types as (L)XX-YY. L is optional and means that the plug twists to lock into place. The XX indicates the voltage, with 5 = 120v and 6 = 208v. YY is the amperage. You might also see a suffix of “R” or “P” indicating receptacle or plug. A very common one is L5-20, meaning a 20A/120v circuit where the plug twists to lock in place.
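The labeling scheme is regular enough to parse mechanically. A small Python sketch, following this post's convention for the voltage codes (the function name and output format are mine, just for illustration):

```python
import re

# Parse NEMA labels of the form (L)XX-YY with an optional R/P suffix.
# Voltage mapping follows the convention above: 5 -> 120v, 6 -> 208v.

def parse_nema(label):
    m = re.fullmatch(r"(L?)(\d+)-(\d+)([RPrp]?)", label)
    if not m:
        raise ValueError(f"not a NEMA label: {label}")
    locking, volt_code, amps, suffix = m.groups()
    return {
        "locking": locking == "L",
        "volts": {"5": 120, "6": 208}.get(volt_code),
        "amps": int(amps),
        "receptacle": suffix.upper() == "R" if suffix else None,
    }

print(parse_nema("L5-20"))  # locking, 120v, 20A
```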

Power Density Considerations – One of the most important criteria for choosing any data center space, whether a single cabinet or a 1000sqft cage, is the amount of power you can put into it. This will be limited by the data center’s policy, which is in turn limited by how much the facility is designed to support. In newer, more modern facilities you can expect to be allowed to draw 5-6kW per cabinet or rack, and 10kW or more in “high density” space. Don’t confuse actual power draw with PEC; read above on that subject.

PDU Considerations – You will want to consider whether to use vertical or horizontal PDUs. My personal preference is to use one primary and one redundant vertical in each open rack, because it makes the cabling much easier. In cabinets I prefer horizontals, simply because verticals won’t fit in the standard-size cabinets that most data centers provide. I make it a point to always prefer my only option. I also always try to get the maximum number of outlets I can squeeze in, as this makes it easier to maximize my draw. It’s irritating to pay for 16 amps of power but run out of outlets at only 13 amps.

I find the best tradeoff for cabinets to be two primary and two redundant 30A/120v circuits, with 16-outlet 2U horizontal PDUs. This allows me to draw over 5kW without trouble, but consumes 8U with PDUs.

In open racks, when I can use large vertical PDUs, I like one primary and one redundant 20A/208v/3-phase circuit, with 45-outlet PDUs. This makes cabling a breeze, and I can easily draw over 5kW in this case. Doing the math, my PEC would be 20*208*1.73*.8 ≈ 5757W, or about 5.8kW.

I hope you’ve found this information useful. I’d welcome any other ideas you’d like to share regarding data center power.


Storage Terminology

I set out to write a post about choosing the right storage. I found that so many of the topics that I wanted to discuss required some terminology explanation that I decided to make this post just about that. Stay with me for the next post, when I’ll get into choosing actual storage.

Host – The server or application which will access the storage.

File System – This is the part of the storage system responsible for managing storage resources and brokering read and write transactions. It handles such things as block and file locking to ensure that no two systems are trying to write to the same parts of the disk at the same time. It also stores metadata about files such as which disk(s) they are located on, and where; filenames, sizes, etc.

SAN vs. NAS – These are the two [main] different ways to access shared storage systems. It is easiest to understand them in relation to each other. The key difference is where the file system resides.

– In a NAS, the file system is on the storage system itself. This allows multiple hosts to naively access data without worrying about all the technicalities mentioned above. The host can read and write files blindly, content in the knowledge that the file system will not allow it to make a mistake. The downside of NAS is that it does not offer the application the ability to control the disks directly, and therefore the application cannot tune disk access to its own ends. Also, some of the helpful things that file systems do – like file locking – may pose a problem for certain types of applications.

– In a SAN, the file system resides on the hosts or applications that are connecting to it. The hosts themselves must carefully manage disk access among themselves in order to prevent data corruption. This is a fairly high development burden, so usually only applications that can truly benefit from highly customized disk access support it. In other cases it may be an explicit requirement that multiple hosts be able to access the same shared files. Some common examples are databases and various sorts of HA clusters, as with VMware for example.

Controller – This is the device that contains all the brainpower for a storage array. It’s really a souped-up, custom-built server, specifically designed for managing storage. It contains CPUs, RAM, and various storage-related add-on cards. This is where the file system will run, and all the disks will connect to it. Controllers are often paired up in a single chassis called a head unit to form an HA storage cluster. In large array clusters there may be multiple tiers of head units.

Disk Shelf – The disk shelves are the devices that hold the disks. They have little-to-no brainpower at all, and simply serve to power up the disks and pass-through all the read/write operations.

Three-tier architecture

I’ve been doing some thinking about application architecture and some of the problems that I’ve seen with that during implementation and development.

A three-tier architecture is often used in the context of web site development, for a couple of reasons:

  • It creates a clear separation of roles for each server component
  • It allows abstraction of each piece
  • Each role can be improved, updated, upgraded, or wholesale reworked, without impacting the rest of the system
  • Redundancy is easier to build in on a function by function basis

These are all great reasons, and most successful web properties do this in some fashion. The tiers may vary in number and type, but the goal is functional separation. Despite these worthy goals, often things go awry during implementation and ongoing development.

Take the case of a typical web-app-database structure. The user makes a request of the web front end. The web layer passes the request to the app layer for processing, which usually entails a call to the database. The user cannot interact with anything other than the web layer, and only the app layer can talk to the database.
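That flow can be sketched in a few lines. All class and method names here are hypothetical; the point is only that the database handle lives solely in the app layer, so the layering is enforced by construction:

```python
# Minimal sketch of strict three-tier layering: the web layer only talks to
# the app layer, and only the app layer holds a database reference.

class Database:
    def query(self, sql):
        return f"rows for: {sql}"

class AppLayer:
    def __init__(self, db):
        self._db = db  # the database is private to the app layer

    def get_user(self, user_id):
        # Illustrative only; real code would use parameterized queries.
        return self._db.query(f"SELECT * FROM users WHERE id = {user_id}")

class WebLayer:
    def __init__(self, app):
        self._app = app  # no database handle here, by design

    def handle_request(self, user_id):
        return self._app.get_user(user_id)

web = WebLayer(AppLayer(Database()))
print(web.handle_request(42))
```

The "just this once" exception described next amounts to handing `WebLayer` a `Database` directly, and once that happens the separation exists only on the architecture diagram.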

At some point, another feature is required – and quickly. The only way to get it done right now is to make an exception (just this once) to allow the web layer to make a database query directly. The tier-structure is broken. A few months down the road, a dozen one-time exceptions later, and the whole thing is one big mess. Now you can’t really tell if the redundancy you thought you had is really there. You’re afraid to change any of the pieces. What’s worse is that you’re equally afraid to fix anything in case your “fix” breaks something else.

Another thing that I see happen quite often is to confuse a logical, functional tier-structure with one that is hard-wired into the network. This need not be the case, but it is often done this way. One can certainly develop the application with a multi-tier architecture, but that doesn’t mean this needs to be built into the network.

There are a couple of good reasons to segregate your network as well:

  • You have a very large network and need to deal with bandwidth and addressing issues
  • Security

For most of the small to medium sized networks that I’m talking about here, only the second of those will apply.

Unfortunately, there are also a few good reasons not to segregate your network this way that often aren’t taken into account:

  • The security you are trying to gain proves to be illusory. In most cases, the security breach will be a result of an application vulnerability rather than a network one.
  • If your web server is compromised, a malicious user is a hop-skip away from everything else. The network can’t provide any safeguards in that case. It must allow traffic from your web server to your app servers in order for your application to function. Again, you have to rely on a properly secured application to thwart the bad guy from digging deeper.
  • All of your internal traffic is now going through a router or firewall that will cost much more per Mb of traffic than if you did not do this. In most cases the vast majority of traffic is internal traffic that never leaves the application. You can buy a switch with a 48Gb backplane for under $2,000 today. A firewall that can pass 1Gb of traffic will cost over $50,000. And don’t forget the standby unit.

Despite all the good reasons not to segregate the network into multiple tiers, I still see this attempted all the time. I think it comes down to application developers not wanting to sacrifice development cycles for the sake of something that’s not a tangible feature. So by segmenting the network, which is conceptually easy, they can appease the security gods. Then when something goes wrong it’s a problem with the network.

My advice: Multi-tier architecture is great when enforced on application design, but not when applied to small-medium local networks. (Yes, that was a rather large qualification, but I want to be clear we’re not talking about WANs or campus-wide, inter-departmental networks here. We’re talking about web applications.)