Life is easy

Jon Jandai – Founder, Pun Pun Center for Self-reliance

There is one word that I have always wanted to say to everybody in my life. That word is “Life is easy.” It’s so easy and fun. Before, I never thought like that.

When I was in Bangkok, I felt like life was very hard, very complicated.

I was born in a poor village in the Northeast of Thailand. When I was a kid, everything was fun and easy, but when TV came, many people came to the village and said, “You are poor. You need to follow success. You need to go to Bangkok to pursue success in your life.”

So I felt bad, I felt poor. So I needed to go to Bangkok.

University Learning

When I went to Bangkok, it was not very fun. You need to learn, study a lot and work very hard, and then you can get success.

I worked very hard, eight hours per day at least, but all I could eat was a bowl of noodles per meal, or some fried rice or something like that.

And where I stayed was very bad: a small room where a lot of people slept. It was very hot.

I started to question a lot. If I work so hard, why is my life so hard? Something must be wrong, because I produce a lot of things, but I cannot get enough. And I tried to learn, I tried to study. I tried to study at the university.

It’s very hard to learn in university, because it’s very boring.

And when I looked at the subjects in the university, in every faculty, most of them held destructive knowledge. There was no productive knowledge in the university for me. If you learn to be an architect or an engineer, that means you ruin more. The more these people work, the more the mountains are destroyed, and the good land in the Chao Praya Basin gets covered with more and more concrete. We destroy more.

If you study in the agriculture faculty or something like that, it means you learn how to poison the land and the water, and you learn to destroy everything. I felt like everything we do is so complicated, so hard. We just make everything hard.

Life is so hard and I felt disappointed.

I started to think about why I had to be here in Bangkok. I thought about when I was a kid: nobody worked eight hours per day. Everybody worked only two months a year, planting rice one month and harvesting the rice another month. The rest was free time, ten months of free time. That’s why people have so many festivals in Thailand; every month they have a festival. Because they have so much free time.

And in the daytime, everybody even takes a nap. Even now in Laos, if you ever go to Laos, people take a nap after lunch. And after they wake up, they just gossip: how’s your son-in-law, how’s your wife, your daughter-in-law. People have a lot of time, and because they have a lot of time, they have time to be with themselves.

And when they have time to be with themselves, they have time to understand themselves. When they understand themselves, they can see what they want in their life. So, many people see that they want happiness, they want love, they want to enjoy their life.

So people see a lot of beauty in their life, and they express that beauty in many ways: some by carving the handles of their knives very beautifully, some by weaving their baskets very nicely. But now, nobody does that. Nobody can do something like that. People use plastic everywhere.

So I felt like something was wrong there; I could not live in that way of living. So I decided to quit university and went back home.

When I went back home, I started to live like I remembered from when I was a kid. I started to work two months a year. I got four tons of rice. And the whole family, six people, we eat less than half a ton per year. So we can sell some rice.

And then I dug two ponds, two fish ponds. We have fish to eat all year round. And I started a small garden, less than half an acre. I spend 15 minutes per day taking care of the garden. I have more than 30 varieties of vegetables in the garden, so six people cannot eat all of it. We have a surplus to sell in the market. We can make some income there, too.

So I feel like it’s easy. Why did I have to be in Bangkok for seven years, working hard and not having enough to eat, when here, with only two months a year and 15 minutes per day, I can feed six people? That’s easy.

House

And before, I thought that stupid people like me, who never got good grades in school, could not have a house. People who are cleverer than me, who got the top grade in the class every year, get a good job, but they need to work more than 30 years to have a house. So how could I, who could not even finish university, have a house? It seemed hopeless for people with a low education, like me.

But then I started to do earthen building, and it’s so easy. I spent two hours per day, from 5 o’clock in the morning to 7 o’clock in the morning. And in three months, I had a house.

Servers 101: Cloud Servers vs. Dedicated Servers

What’s the difference between a Cloud Server and a Dedicated Server?
A Cloud Server is a process that runs on what’s called a hypervisor. This hypervisor software lets a single computer appear to be a dozen or more computers; a single Cloud Server is just one of those processes. It appears to the user like a server, but it isn’t its own collection of hardware. A Dedicated Server is its own collection of hardware: a whole computer, complete with all the hardware that makes up a computer, including a processor, hard drive, RAM and power supply unit. eSecureData.com provides both Cloud Servers and Dedicated Servers to our clients.
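As a quick illustration of the difference, here is a minimal sketch, assuming a Linux system with /proc/cpuinfo available, that guesses whether the machine it runs on is virtualized by looking for the CPU “hypervisor” flag. It is a heuristic, not a definitive test.

```python
# Minimal sketch, assuming a Linux system with /proc/cpuinfo: guess whether
# this machine is a Cloud Server (virtualized) or a Dedicated Server (bare
# metal) by checking for the CPU "hypervisor" flag. Heuristic only.

def looks_virtualized(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags") and " hypervisor" in line:
                return True
    return False

if __name__ == "__main__":
    kind = "Cloud Server (virtualized)" if looks_virtualized() else "Dedicated Server (bare metal)"
    print("This machine looks like a", kind)
```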

Which is better, a Cloud Server or a Dedicated Server?
Cloud Servers can be scaled up and down with minimal fuss, using software only. You can increase or decrease RAM, Hard Drive space and even CPU power all dynamically using a software interface. Dedicated Servers require fussing with hardware to scale them up or down. This being said, Cloud Servers are at the mercy of their hypervisor and the hardware platform they’re running on. They almost never perform as well as dedicated servers. They can’t — they’re a process that’s sharing a computer with others, whereas the Dedicated Server is a totally self-contained computer sharing nothing with anyone.
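To make the “software only” point concrete, the sketch below resizes a Cloud Server through a provider’s HTTP API. The endpoint, token and payload are hypothetical placeholders (this is not eSecureData’s actual API); the equivalent change on a Dedicated Server would mean physically swapping RAM, disks or CPUs.

```python
# Hypothetical sketch: scaling a Cloud Server with a single API call.
# The base URL, token and JSON fields below are invented placeholders.
import requests

API_BASE = "https://api.example-provider.com/v1"   # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"                        # placeholder credential

def resize_cloud_server(server_id, ram_mb, vcpus, disk_gb):
    resp = requests.post(
        f"{API_BASE}/servers/{server_id}/resize",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"ram_mb": ram_mb, "vcpus": vcpus, "disk_gb": disk_gb},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: double the RAM and add vCPUs without touching any hardware.
# resize_cloud_server("srv-123", ram_mb=8192, vcpus=4, disk_gb=160)
```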

What’s easier to back up?
Cloud Servers can be backed up in their entirety with very little fuss. Dedicated Servers require a bit more initial setup and scripting to back them up easily. Both require almost no maintenance once automated backups are configured.
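For the Dedicated Server case, “a bit more initial setup and scripting” can be as simple as a cron-driven script like the sketch below, which mirrors a few key directories to a backup host over SSH. The host name, target path and source directories are assumptions for illustration.

```python
# Minimal backup sketch for a Dedicated Server: mirror key directories to a
# remote backup host over SSH, suitable for running nightly from cron.
# The backup host, the /backups target and the source directories are
# illustrative assumptions.
import subprocess

BACKUP_HOST = "backup.example.com"          # assumed backup target
SOURCE_DIRS = ["/etc", "/var/www", "/home"] # assumed directories worth keeping

def run_backup():
    for src in SOURCE_DIRS:
        # -a preserves permissions and timestamps, -R recreates the source
        # path under the destination, --delete keeps the mirror in sync.
        subprocess.run(["rsync", "-aR", "--delete", src,
                        f"{BACKUP_HOST}:/backups/"], check=True)

if __name__ == "__main__":
    run_backup()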

What’s the difference between a Cloud Server and a Virtual Server?
Nothing. They’re the same thing. Some people use Cloud Server to imply failover capabilities and fault tolerance, but that’s not guaranteed in any way and shouldn’t be assumed.

Is it true that Cloud Servers are unaffected by what other cloud servers running on the same host computer are doing?
No. It is true that Cloud Servers run in separate spaces and are isolated as far as software can isolate them, but it is also true that if the other servers are using a lot of CPU cycles, there are fewer cycles available to your Cloud Server.
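One way to see this contention from inside a Cloud Server is to watch the kernel’s “steal” counter, which records time the hypervisor gave to other guests while this one wanted to run. A minimal sketch, assuming a Linux guest:

```python
# Sketch: measure the CPU time "stolen" from this Cloud Server by other
# guests on the same host, using the steal counter in /proc/stat (Linux only).
# A steadily high steal percentage suggests noisy neighbours on the hypervisor.
import time

def read_cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()         # aggregate "cpu" line
    values = list(map(int, fields[1:]))
    total = sum(values)
    steal = values[7] if len(values) > 7 else 0   # 8th counter is steal
    return total, steal

def steal_percent(interval=1.0):
    t1, s1 = read_cpu_times()
    time.sleep(interval)
    t2, s2 = read_cpu_times()
    dt = t2 - t1
    return 100.0 * (s2 - s1) / dt if dt else 0.0

if __name__ == "__main__":
    print(f"CPU steal over 1s: {steal_percent():.2f}%")
```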

Can a Dedicated Server scale up and down like a Cloud Server?
No. To scale a Dedicated Server up or down, you need to replace hardware, which can often be done very quickly and painlessly, but it still isn’t as easy as configuring software, which is all that’s needed for a Cloud Server.

Which is easier to move between data centers or between backbones?
A Cloud Server is certainly easier to move between data centers. A Dedicated Server requires that you physically ship hardware, whereas a Cloud Server can be transferred over the Internet. Moving between backbones, however, is often easier with the Dedicated Server. At eSecureData, our VLANs are configured such that all Cloud Servers on a single host machine share the same VLAN. This means moving to a different backbone requires migrating the Cloud Server to another host machine. Our Dedicated Servers, however, can be switched to another backbone just by toggling their VLAN, which can be done entirely with software.

WiMAX Base Station (BS)

A WiMAX base station comprises indoor electronics and a WiMAX tower. A base station can normally cover an area of about 50 kilometres (30 miles) in radius, but practical and environmental factors usually limit the WiMAX range to about 10 km (6 miles). Any wireless user within the coverage area is able to access the WiMAX services. The base station uses the media access control (MAC) layer defined in the standard to allocate uplink and downlink bandwidth to subscribers according to their requirements, on a real-time basis.
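As a rough illustration of why the quoted line-of-sight figure shrinks in practice, the sketch below estimates maximum range from a simple free-space path-loss model. All of the link-budget numbers are assumptions for illustration; real WiMAX planning uses more pessimistic non-line-of-sight propagation models.

```python
# Illustrative free-space range estimate. Transmit power, antenna gains and
# receiver sensitivity below are assumed values, not figures from the standard.
import math

def max_range_km(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                 rx_sensitivity_dbm, freq_mhz, extra_loss_db=0.0):
    # Total loss the link can tolerate before the signal drops below the
    # receiver's sensitivity.
    link_budget_db = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
                      - rx_sensitivity_dbm - extra_loss_db)
    # Free-space path loss: FSPL(dB) = 32.44 + 20*log10(d_km) + 20*log10(f_MHz)
    return 10 ** ((link_budget_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

# Assumed figures for a 3.5 GHz link (illustrative only):
print(f"{max_range_km(35, 15, 3, -84, 3500):.0f} km with a clear line of sight")
print(f"{max_range_km(35, 15, 3, -84, 3500, extra_loss_db=14):.0f} km with 14 dB of environmental losses")
```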
WiMAX Receiver (WiMAX CPE)

A WiMAX receiver, also referred to as Customer Premises Equipment (WiMAX CPE), may have a separate antenna, or it could be a stand-alone box or a PCMCIA card inserted into a laptop or desktop computer. Access to a WiMAX base station is similar to accessing a wireless access point (AP) in a Wi-Fi network, but the coverage is much greater.

So far, one of the biggest restrictions to the widespread acceptance of WiMAX has been the cost of the WiMAX CPE (receiver). This is not only the cost of the CPE itself, but also that of installation. In the past, Broadband Wireless Access (BWA) has been predominantly Line of Sight (LOS), requiring highly skilled labour and a truck roll to install and provide service to the customer. The concept of self-installed WiMAX CPE has been difficult for BWA from the beginning, but with the advent of WiMAX technology this issue seems to be getting resolved.

Smarter Models for Wireless Broadband Access

The vast majority of the world is still waiting for broadband service. In the well-populated parts of developed countries, existing wireline phone and cable TV infrastructure made broadband relatively easy to offer, and availability is now very high. Everywhere else, the high costs of current options, such as deploying new wireline infrastructure or large-scale macro-cellular wireless models like WiMAX, are big barriers to further broadband subscriber growth.

Steven Glapa on Wireless Broadband Access

We’ve solved this problem. Our field-proven Smart Wi-Fi approach to wireless broadband access opens vast new segments of customers to broadband operators. We dramatically reduce the capital costs of broadband infrastructure (by factors of 5x or more), which, in combination with build-as-you-grow business models, can deliver attractively short breakeven timelines even in very low-ARPU markets.
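A back-of-envelope way to see why the capital-cost reduction matters: breakeven time per subscriber is roughly capex divided by monthly margin. The dollar figures in the sketch are hypothetical placeholders, not Ruckus or operator pricing.

```python
# Hypothetical breakeven arithmetic: the figures below are invented
# placeholders used only to show how a ~5x capex reduction shortens payback.

def breakeven_months(capex_per_sub, monthly_arpu, monthly_opex_per_sub):
    margin = monthly_arpu - monthly_opex_per_sub   # contribution per subscriber
    return capex_per_sub / margin

print(breakeven_months(250, 10, 4))   # ~41.7 months with macro-cellular-style capex
print(breakeven_months(50, 10, 4))    # ~8.3 months with capex cut by 5x
```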

The Ruckus Smart Wi-Fi Broadband Access System is the industry’s first and only complete end-to-end solution for last-mile access based on 802.11n and adaptive beamforming. It includes customer premises equipment, meshed Wi-Fi access points, high-speed wireless backhaul, and comprehensive network-wide element and service management. The solution’s unrivaled performance in wireless broadband applications is driven by several factors:

• outstanding wireless coverage, capacity, and interference rejection from the industry-leading performance of Ruckus smart antenna technology
• smart, hybrid meshing and simple point-to-point backhaul designs for straightforward deployment — with no PhD in radio-frequency physics required
• smart QoS, for true carrier-class management of multi-play traffic and access control throughout the network
• large-scale success in the crucial test of real-world deployments

WBA

Broadband Wireless Access (BWA) offers fast, easy, cost-effective and flexible Wireless DSL deployments.

Airspan has one of the most comprehensive product and solution portfolios in the industry. Our portfolio includes both Point-to-Point (PtP) and Point-to-Multipoint (PMP) products, providing cost-effective solutions to a wide variety of applications.

Today our products are being used in a great number of diverse applications, ranging from delivering wireless DSL to homes, businesses and schools in urban and rural settings, to providing communication solutions to the oil and transportation industries.

CIOs Still Need Technical Skills, at Least

Chief information officers (CIOs) need to be business leaders. That is an incontrovertible fact. Over the years, however, much has been written by people who believe that a CIO can be fully successful without knowing anything about what’s really keeping the lights on. The tone of these articles often seems to suggest that companies can simply pluck someone out of finance or sales and thrust them into the job of the CIO and expect the rainbow and leprechaun to appear with the pot of gold. After all, IT isn’t really about technology, right? It’s about getting business done using technology.

Before I get into the discussion, let me ask a couple of questions:

• Would you hire a CFO that had only ancillary experience in finance?

• Would you hire a VP for Sales that didn’t know the sales process from start to finish?

I’d bet that the answer to both of these questions is “No”. Why, then, do some companies believe that a CIO can simply be plucked from the fold without a specific background or training in technology? After all, no matter how strategic the efforts of an IT group, if the “lights go out”, that’s all that’s going to matter. That is, if the basics that people have come to expect fail to be met, it won’t matter how experienced or inexperienced the CIO is.

It’s important to understand that I don’t believe that the CIO needs to be the “alpha tech” in the office. After all, except in very small organizations, the CIO probably won’t be configuring switches, creating LUNs on a SAN or making sure that VMware is configured to fail over. However, the CIO should:

• Know what is and is not possible – to a reasonable extent – with the network hardware on hand.

• Understand – at least at a basic level – what it means to create a LUN and how much capacity there is in the organization.

• Realize that VMware can be configured for automated failover to meet disaster recovery requirements.

While the CIO must speak the language of the business in order to be taken seriously, without the respect of the IT staff it may be difficult to get them behind the CIO. Charisma and business acumen alone may be enough to accomplish this goal, but I believe that most IT people want to work for someone who understands their daily work and what it really takes to get a job done, and who appreciates the effort and challenges that are inherent in the work.

In summary, I believe that CIOs must have at least some degree of technical knowledge. They need to understand what it really takes to keep the lights on.

This becomes ever more critical when it comes to prioritizing new projects and determining what it will really take to implement them. The tech-savvy CIO will gain an understanding of a new project and be better able to weigh it against what appear to be “tech-only” projects, such as implementing a new backup system, which, on the surface, would appear to have no business benefit. Further, when considering new needs, a CIO who has a broad understanding of the technology environment may be able to envision a quicker path to success that leverages existing systems. At the very least, the tech-savvy CIO will be able to “sit at two tables”. The first table is the executive table, the magical place where business decisions are made. The second is the IT Directors’ table, where the high-level implementation details are discussed.

I want to reiterate my opening sentence: CIOs need to be business leaders first and technologists second, but there needs to be a good balance.

Perhaps the primary danger with an especially tech-savvy CIO is this: getting too deeply involved in the tactical at the expense of the strategic. Perhaps, fearing this outcome, companies prefer someone who can’t focus on the tactical at all?

Breaking down Enterprise Storage

When considering storage solutions we are presented with a bewildering array of jargon and acronyms: SATA versus SAS, iSCSI versus Fibre Channel, SAN versus NAS, for example. We also need to be able to decode some of the marketing terms in use, such as entry-level, mid-range and high-end. So let’s try to break down some of the techno-babble. When evaluating an enterprise storage solution we really need to get down to analyzing our requirements in terms of performance, reliability, scalability, feature set, service and support. We then compare vendor offerings against our budget constraints.

An enterprise storage solution is essentially a combination of hard disk technology, controllers, and a transport protocol that allows data to be transferred to and from applications running on servers. Data is transferred over a network fabric and the system is controlled via management software.

When talking about enterprise storage we are thinking primarily of SAN (Storage Area Network) infrastructure versus DAS (Direct Attached Storage), so let’s explore the difference and the benefits of SAN. In a Direct Attached Storage scenario, hard disks are combined in an array and physically attached to a single server, either internally or in an external chassis. The data blocks on the disks can only be manipulated by the server to which they are directly attached.

With a SAN infrastructure, a network fabric is introduced, along with management software allowing the disks to be presented to multiple servers at block level. Some hybrid solutions also allow the disks to be accessed as file shares over standard network file sharing protocols, i.e. they can also act as NAS (Network Attached Storage) appliances. The key to thinking about SAN storage is to understand that disk blocks are pooled into volumes which can then be presented over the SAN fabric to physical servers as if they were direct attached hard disks. This capability can provide enormous benefits when server and storage requirements grow in complexity. Some of the benefits can be summarized as follows:

Disk Utilization: in a direct attached storage scenario, underutilized disks in a server cannot be exploited. With SAN storage you can pool the disks together and assign storage to servers as needed. There is no need to over-specify disk capacity per server for future growth, and as storage becomes free it can be reassigned.

Backup And Disaster Recovery: using snapshot and replication technology you can centralize your backup and remove this workload from heavily used servers. Since the storage is abstracted from the application server you can simplify failover modes and reduce backup windows (a minimal snapshot sketch follows this list of benefits).

Management: when you need to assign new storage to a server, this can all be handled remotely through a management console. There is no need for an engineer to visit the server and directly attach new disks, possibly requiring downtime.

Virtualization Capabilities: one of the main benefits of virtualization technology is the ability to do a live migration of a virtual machine from one host to another. Of course this is only possible if the storage is abstracted from the server; it is not possible with direct attached storage.
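As a concrete illustration of the snapshot idea mentioned under Backup And Disaster Recovery, the sketch below takes a point-in-time LVM snapshot, streams the frozen image to a backup file, and releases it. The volume group, logical volume and backup path are assumed names; a SAN array does the same thing conceptually through its own management interface.

```python
# Sketch, assuming Linux LVM with a volume group "vg_data" and logical volume
# "lv_app" (both invented names): snapshot, back up the frozen image, clean up.
import subprocess

VG, LV = "vg_data", "lv_app"

def backup_via_snapshot():
    snap = f"{LV}_snap"
    # Create a copy-on-write snapshot; 5G of change tracking is an assumption.
    subprocess.run(["lvcreate", "--snapshot", "--size", "5G",
                    "--name", snap, f"/dev/{VG}/{LV}"], check=True)
    try:
        # Stream the consistent block image to a compressed backup file.
        subprocess.run(f"dd if=/dev/{VG}/{snap} bs=4M | gzip > /backup/{LV}.img.gz",
                       shell=True, check=True)
    finally:
        # Always release the snapshot so it does not fill up and invalidate itself.
        subprocess.run(["lvremove", "-f", f"/dev/{VG}/{snap}"], check=True)

if __name__ == "__main__":
    backup_via_snapshot()
```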

Vendors tend to differentiate the products in their product lines as entry-level, mid-range or high-end. These are marketing terms and there is no clear boundary, but they reflect the technologies in use and, of course, the list price. So let’s examine a little how the vendors differentiate their product offerings.

An entry-level storage system marketed as a NAS filer may use hot-swap SATA disks combined in a RAID array, with 1 or 10 gigabit Ethernet interfaces. Such a system may be able to present data via the CIFS or NFS protocols as file shares over TCP/IP. It may also have SAN capabilities, allowing volumes to be presented over iSCSI, and may have snapshot and cloning capabilities. There will likely be some battery-backed cache and a modular architecture allowing arrays to be connected via an interlink.

A mid-range system will have the above capabilities but will be marketed as a SAN solution, with the primary interface being iSCSI or possibly FC (Fibre Channel). It may use near-line or mid-line hot-swap SAS disks and optional SSDs (solid-state disks) for acceleration. It may offer compression and inline de-duplication, and will have dual controllers allowing for multipath connections to the storage. There may be optional modules for CDP (Continuous Data Protection) to allow volumes to be replicated, either synchronously or asynchronously, between remote arrays. A mid-range system will have fewer single points of failure than an entry-level system, e.g. hot-swap power supplies, fans and controllers.

A high-end system, on the other hand, will exhibit all the features of a mid-range system but will be designed to have zero single points of failure, with many fault-tolerant features, built-in error correction, and redundancy. The interface will likely be Fibre Channel and the disk technology a mix of Fibre Channel disks and enterprise SAS disks, with solid-state disks used to accelerate performance. A high-end system will likely have advanced management features and smart tiering options to automatically serve frequently accessed data from the highest-performing disks and allow low-value data to be served from less expensive disks.

When analyzing your requirements for enterprise storage solutions for your organization, look carefully at your technology options. If you need raw capacity but your data is transient, you may wish to consider a solution based on SATA disks; however, if high performance is a requirement, you may look at a solution based on enterprise SAS disks with SSD acceleration. Since SAS disks have considerably better performance and reliability than SATA disks, factor this into your decision making. Benchmark your applications to understand where performance bottlenecks may occur. You need to look at metrics like I/O operations per second (IOPS) and random versus sequential read/write rates, as in the sketch below.

If you are looking at a high-end solution, consider whether you need a Fibre Channel network, which will require expensive FC switches and Host Bus Adapters. Alternatively, you may take a converged network approach and build your storage network on Ethernet/IP. If high availability is your goal, look at single points of failure in the solution and the MTBF (Mean Time Between Failures) values for the components. Look at the feature set and manageability options – how flexible are the snapshot/cloning facilities, for example? Does the system give you Continuous Data Protection options? When selecting a vendor, study their support terms, particularly their spare part supply chain. How quickly can they deliver a replacement disk when one fails, for example? There are many variables to consider when evaluating the suitability of a storage solution, and prices range significantly across product lines. Understanding your requirements is the key to selecting the right solution within your budget.
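To put rough numbers on the random-access side of that, here is a quick pure-Python probe of random 4 KiB read IOPS against an existing test file, assuming a POSIX system. The file path is an assumption; use a file much larger than RAM (or a dedicated benchmark tool) to avoid measuring the page cache, and test writes and sequential throughput separately.

```python
# Quick-and-dirty random-read IOPS probe. Results include page-cache hits,
# so use a test file much larger than RAM for numbers closer to the disks.
import os
import random
import time

def random_read_iops(path, duration_s=10, block=4096):
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    reads, end = 0, time.time() + duration_s
    try:
        while time.time() < end:
            offset = random.randrange(0, max(1, size - block))
            os.pread(fd, block, offset)   # read one 4 KiB block at a random offset
            reads += 1
    finally:
        os.close(fd)
    return reads / duration_s

if __name__ == "__main__":
    # "/data/testfile.bin" is an assumed path to a pre-created test file.
    print(f"~{random_read_iops('/data/testfile.bin'):.0f} random 4 KiB reads/sec")
```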

Some Storage Technology Acronyms:

• SATA – Serial ATA (Serial Advanced Technology Attachment)

• ATA – Advanced Technology Attachment

• SAS – Serial Attached SCSI

• SCSI – Small Computer System Interface

• iSCSI – Internet SCSI (SCSI over TCP/IP)

• FC – Fibre Channel

• NAS – Network Attached Storage

• SAN – Storage Area Network

• DAS – Direct Attached Storage