Life is easy

Jon Jandai – Founder, Pun Pun Center for Self-reliance

There is one word that I have always wanted to say to everybody in my life. That word is “Life is easy.” It’s so easy and fun. Before, I never thought like that.

When I was in Bangkok, I felt like life was very hard, very complicated.

I was born in a poor village in the northeastern part of Thailand. And when I was a kid, everything was fun and easy, but when the TV came, many people came to the village. They said, “You are poor, you need to chase success in your life. You need to go to Bangkok to pursue success in your life.”

So I felt bad, I felt poor. So I needed to go to Bangkok.

University Learning

When I went to Bangkok, it was not very fun. You need to learn, study a lot and work very hard, and then you can get success.

I worked very hard, eight hours per day at least, but all I could eat was a bowl of noodles per meal, or a plate of fried rice or something like that.

And where I stayed was very bad: a small room where a lot of people slept. It was very hot.

I started to question a lot. When I work so hard, why is my life so hard? There must be something wrong, because I produce a lot of things, but I cannot get enough. And I tried to learn, I tried to study. I tried to study at the university.

It’s very hard to learn in university, because it’s very boring.

And when I looked at subjects in the university, in every faculty, most of them held destructive knowledge. There was no productive knowledge in university for me. If you learn to be an architect or an engineer, that means you will ruin more. The more these people work, the more the mountains will be destroyed. And the good land in the Chao Phraya Basin will be covered with more and more concrete. We destroy more.

If you study in the agriculture faculty or something like that, that means you learn how to poison the land and the water, and learn to destroy everything. I feel like everything we do is so complicated, so hard. We just make everything hard.

Life is so hard and I felt disappointed.

I started to think about why I had to be in Bangkok. I thought about when I was a kid: nobody worked eight hours per day; everybody worked two hours a day, two months a year, planting rice one month and harvesting the rice another month. The rest was free time, ten months of free time. That’s why people have so many festivals in Thailand; every month they have a festival. Because they have so much free time.

And in the daytime, everybody even takes a nap. Even now in Laos, if you ever go there, people take a nap after lunch. And after they wake up, they just gossip: how’s your son-in-law, how’s your daughter-in-law? People have a lot of time, and because they have a lot of time, they have time to be with themselves.

And when they have time to be with themselves, they have time to understand themselves. When they understand themselves, they can see what they want in their life. So, many people see that they want happiness, they want love, they want to enjoy their life.

So, people see a lot of beauty in their life, and they express that beauty in many ways: some by carving the handles of their knives very beautifully, some by weaving their baskets very nicely. But now, nobody does that. Nobody can do something like that. People use plastic everywhere.

So, I felt like something was wrong there; I could not live in that way of living. So, I decided to quit university and went back home.

When I went back home, I started to live like I remembered, like when I was a kid. I started to work two months a year. I got four tons of rice. And the whole family, six people, eats less than half a ton per year. So we could sell some rice.

And then I dug two fish ponds, so we have fish to eat all year round. And I started a small garden, less than half an acre, and I spend 15 minutes per day taking care of it. I have more than 30 varieties of vegetables in the garden, so six people cannot eat all of it. We have a surplus to sell in the market, and we can make some income there, too.


So, I feel like it’s easy. Why did I have to be in Bangkok for seven years, working hard and then not having enough to eat, when here, only two months a year and 15 minutes per day can feed six people? That’s easy.

And before, I thought that stupid people like me, who never got a good grade in school, could not have a house. People who are cleverer than me, who got number one in the class every year, got good jobs, but they need to work more than 30 years to have a house. But for me, who could not finish university, how could I have a house? It seemed hopeless for people with a low education, like me.

But then I started to do earthen building. It’s so easy. I spent two hours per day, from 5 o’clock in the morning to 7 o’clock in the morning, and in three months I had a house.


Servers 101: Cloud Servers vs. Dedicated Servers

What’s the difference between a Cloud Server and a Dedicated Server?
A Cloud Server is a process that runs on what’s called a hypervisor. This hypervisor software lets a single computer appear to be a dozen or more computers. A single Cloud Server is just one of those processes. It appears to the user like a server, but it isn’t its own collection of hardware. A Dedicated Server is its own collection of hardware. It is a whole computer, complete with all the hardware that makes up a computer: processor, hard drive, RAM and power supply unit. eSecureData provides both Cloud Servers and Dedicated Servers to our clients.

Which is better, a Cloud Server or a Dedicated Server?
Cloud Servers can be scaled up and down with minimal fuss, using software only. You can increase or decrease RAM, Hard Drive space and even CPU power all dynamically using a software interface. Dedicated Servers require fussing with hardware to scale them up or down. This being said, Cloud Servers are at the mercy of their hypervisor and the hardware platform they’re running on. They almost never perform as well as dedicated servers. They can’t — they’re a process that’s sharing a computer with others, whereas the Dedicated Server is a totally self-contained computer sharing nothing with anyone.

What’s easier to back up?
Cloud Servers can be backed up in their entirety with very little fuss. Dedicated Servers require a bit more initial setup and scripting to back them up easily. Both require almost no maintenance once automated backups are configured.
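The “initial setup and scripting” for a Dedicated Server backup can be as simple as a scheduled archive job. A minimal sketch in Python (the directories and destination are hypothetical examples, not part of any eSecureData tooling):

```python
import tarfile
import time
from pathlib import Path

def backup_directories(dirs, dest_dir):
    """Archive each directory into a timestamped .tar.gz under dest_dir."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archives = []
    for d in dirs:
        d = Path(d)
        archive = dest / f"{d.name}-{stamp}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            # arcname keeps paths inside the archive relative to the directory name
            tar.add(d, arcname=d.name)
        archives.append(archive)
    return archives
```

Run from cron (or Task Scheduler) nightly, this is essentially all the “maintenance-free” automation both server types need once configured.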

What’s the difference between a Cloud Server and a Virtual Server?
Nothing. They’re the same thing. Some people use Cloud Server to imply failover capabilities and fault tolerance, but that’s not guaranteed in any way and shouldn’t be assumed.

Is it true that Cloud Servers are unaffected by what other cloud servers running on the same host computer are doing?
No. It is true that Cloud Servers run in separate spaces and are isolated as far as software can isolate them, but it is also true that if the other servers are using a lot of CPU cycles, there are fewer cycles available to your Cloud Server.

Can a Dedicated Server scale up and down like a Cloud Server?
No. To scale a Dedicated Server up or down, you need to replace hardware, which can often be done very quickly and painlessly, but it still isn’t as easy as configuring software, which is all that’s needed for a Cloud Server.

Which is easier to move between data centers or between backbones?
A Cloud Server is certainly easier to move between data centers. A Dedicated Server requires that you physically ship hardware, whereas a Cloud Server can be transferred over the Internet. Moving between backbones, however, is often easier with the Dedicated Server. At eSecureData, our VLANs are configured such that all Cloud Servers on a single host machine share the same VLAN. This means moving to a different backbone requires migrating the Cloud Server to another host machine. Our Dedicated Servers, however, can be switched to another backbone just by toggling their VLAN, which can be done entirely with software.

Wimax Base Station (BS)

A WiMAX base station comprises internal devices and a WiMAX tower. A WiMAX base station can normally cover an area of about 50 kilometres (30 miles) in radius, but terrain and environmental factors often limit the practical range of WiMAX to around 10 km (6 miles). Any wireless user within the coverage area is able to access WiMAX services. The WiMAX base station uses the media access control layer defined in the standard and allocates uplink and downlink bandwidth to subscribers according to their requirements on a real-time basis.
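The standard leaves the exact scheduling policy to the implementation. As a toy illustration of requirement-based allocation (not the actual 802.16 scheduler), a base station might grant requests in full when capacity allows and scale all grants down proportionally when oversubscribed:

```python
def allocate_bandwidth(total_kbps, requests):
    """Grant each subscriber its requested rate if total capacity allows;
    otherwise scale every grant down proportionally (hypothetical policy)."""
    demand = sum(requests.values())
    if demand <= total_kbps:
        return dict(requests)
    scale = total_kbps / demand
    return {sub: req * scale for sub, req in requests.items()}
```

In a real base station this decision is re-run continuously as subscribers issue new bandwidth requests, which is what “on a real-time basis” amounts to.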
Wimax Receiver (WiMAX CPE)
A WiMAX receiver, also referred to as Customer Premise Equipment (WiMAX CPE), may have a separate antenna, or could be a stand-alone box or a PCMCIA card inserted into a laptop or desktop computer. Access to a WiMAX base station is similar to accessing a wireless access point (AP) in a Wi-Fi network, but the coverage is much greater.

So far, one of the biggest restrictions to the widespread acceptance of WiMAX has been the cost of WiMAX CPE (the WiMAX receiver). This is not only the cost of the CPE itself, but also that of installation. In the past, Broadband Wireless Access (BWA) has been predominantly Line Of Sight (LOS), requiring highly skilled labour and a truck roll to install and provide a service to the customer. The concept of a self-installed WiMAX CPE has been difficult for BWA from the beginning, but with the advent of WiMAX technology this issue seems to be getting resolved.

Smarter Models for Wireless Broadband Access

The vast majority of the world is still waiting for broadband service. In the well-populated parts of developed countries, existing wireline phone and cable TV infrastructure made broadband relatively easy to offer, and availability is now very high. Everywhere else, the high costs of current options, such as deploying new wireline infrastructure or large-scale macro-cellular wireless models like WiMAX, are big barriers to further broadband subscriber growth.

We’ve solved this problem. Our field-proven Smart Wi-Fi approach to wireless broadband access opens vast new segments of customers to broadband operators. We dramatically reduce the capital costs of broadband infrastructure (by factors of 5x or more), which in combination with build-as-you-grow business models can deliver attractively short breakeven timelines even in very low-ARPU markets.

The Ruckus Smart Wi-Fi Broadband Access System is the industry’s first and only complete end-to-end solution for last-mile access based on 802.11n and adaptive beamforming. It includes customer premise equipment, meshed Wi-Fi access points, high-speed wireless backhaul, and comprehensive network-wide element and service management. The solution’s unrivaled performance in wireless broadband applications is driven by several factors:

• outstanding wireless coverage, capacity, and interference rejection from the industry-leading performance of Ruckus smart antenna technology
• smart, hybrid meshing and simple point-to-point backhaul designs for straightforward deployment, with no PhD in radio-frequency physics required
• smart QoS, for true carrier-class management of multi-play traffic and access control throughout the network
• large-scale success in the crucial test of real-world deployments


Broadband Wireless Access (BWA) offers fast, easy, cost effective and flexible Wireless DSL deployments.

Airspan has one of the most comprehensive product and solution portfolios in the industry. Our portfolio includes both Point-to-Point (PtP) and Point-to-Multipoint (PMP) products providing cost effective solutions to a wide variety of applications.

Today our products are being used in a great number of diverse applications, ranging from delivering wireless DSL to homes, businesses and schools in urban and rural settings to providing communication solutions to the oil and transportation industries.

CIOs Still Need Technical Skills, at Least

Chief information officers (CIOs) need to be business leaders. That is an incontrovertible fact. Over the years, however, much has been written by people who believe that a CIO can be fully successful without knowing anything about what’s really keeping the lights on. The tone of these articles often seems to suggest that companies can simply pluck someone out of finance or sales, thrust them into the job of the CIO, and expect the rainbow and the leprechaun to appear with the pot of gold. After all, IT isn’t really about technology, right? It’s about getting business done using technology.

Before I get into the discussion, let me ask a couple of questions:

• Would you hire a CFO that had only ancillary experience in finance?

• Would you hire a VP for Sales that didn’t know the sales process from start to finish?

I’d bet that the answer to both of these questions is “No”. Why, then, do some companies believe that a CIO can simply be plucked from the fold without specific background or training in technology? After all, no matter how strategic the efforts of an IT group, if the “lights go out”, that’s all that’s going to matter. That is, if the basics that people have come to expect fail to be met, it won’t matter how experienced or inexperienced the CIO is.

It’s important to understand that I don’t believe that the CIO needs to be the “alpha tech” in the office. After all, except in very small organizations, the CIO probably won’t be configuring switches, creating LUNs on a SAN or making sure that VMware is configured to fail over. However, the CIO should:

• Know what is and is not possible – to a reasonable extent – with the network hardware on hand.

• Understand – at least at a basic level – what it means to create a LUN and how much capacity there is in the organization.

• Realize that VMware can be configured for automated failover to meet disaster recovery requirements.

While the CIO must speak the language of the business in order to be taken seriously, without the respect of the IT staff it may be difficult to get them behind the CIO. Charisma and business acumen alone may be enough to accomplish this goal, but I believe that most IT people want to work for someone who understands their daily work and what it really takes to get a job done, and who appreciates the effort and challenges that are inherent in the work.

In summary, I believe that CIOs must have at least some degree of technical knowledge. They need to understand what it really takes to keep the lights on.

This becomes ever more critical when it comes to prioritizing new projects and determining what it will really take to implement a new project. The tech-savvy CIO will gain an understanding of the new project and be able to better weigh it against what appear to be “tech-only”-focused projects, such as implementing a new backup system, which, on the surface, would appear to have no business benefits. Further, when considering new needs, a CIO who has a broad understanding of the technology environment may be able to envision a quicker path to success that leverages existing systems. At the very least, the tech-savvy CIO will be able to “sit at two tables”. The first table is the executive table, the magical place where business decisions are made. The second table: the IT Directors table, where the high-level implementation details are discussed.

I want to reiterate my opening point: CIOs need to be business leaders first and technologists second, but there needs to be a good balance.

Perhaps the primary danger with an especially tech-savvy CIO is this: getting too deeply involved in the tactical at the expense of the strategic. Perhaps, fearing this outcome, companies prefer someone who can’t focus on the tactical?

Breaking down Enterprise Storage

When considering storage solutions we are presented with a bewildering array of jargon and acronyms: SATA versus SAS, iSCSI versus Fibre Channel, SAN versus NAS, for example. We need to be able to decode some of the marketing terms in use, such as entry level, mid-range and high end. So let’s try to break down some of the techno babble. When evaluating an enterprise storage solution we really need to get down to analyzing our requirements in terms of performance, reliability, scalability, feature set, service and support. We then compare vendor offerings against our budget constraints.

An enterprise storage solution is essentially a combination of hard disk technology, controllers, and a transport protocol that allows data to be transferred to and from applications running on servers. Data is transferred over a network fabric and the system is controlled via management software.

When talking about enterprise storage we are thinking primarily of SAN (Storage Area Network) infrastructure versus DAS (Direct Attached Storage), so let’s explore the difference and the benefits of SAN. In a Direct Attached Storage scenario, hard disks are combined in an array and physically attached to a single server, either internally or in an external chassis. The data blocks on the disks can only be manipulated by the server to which they are directly attached.

With a SAN infrastructure, a network fabric is introduced, along with management software allowing the disks to be presented to multiple servers at block level. Some hybrid solutions also allow the disks to be accessed as file shares over standard network file sharing protocols, i.e. they can also act as NAS (Network Attached Storage) appliances. The key to thinking about SAN storage is to understand that disk blocks are pooled into volumes, which can then be presented over the SAN fabric to physical servers as if they were direct attached hard disks. This capability can provide enormous benefits when server and storage requirements grow in complexity. Some of the benefits can be summarized as follows:

Disk Utilization: in a direct attached storage scenario, underutilized disks in a server cannot be exploited. With SAN storage you can pool the disks together and assign storage to servers as needed. There is no need to over-specify disk capacity per server for future growth, and as storage becomes free it can be reassigned.

Backup And Disaster Recovery: using snapshot and replication technology you can centralize your backup and remove this workload from heavily used servers. Since the storage is abstracted from the application server, you can simplify failover modes and reduce backup windows.

Management: when you need to assign new storage to a server, this can all be handled remotely through a management console. There is no need for an engineer to visit the server and directly attach a new disk, possibly requiring downtime.

Virtualization Capabilities: one of the main benefits of virtualization technology is the ability to do a live migration of a virtual machine from one host to another. Of course this is only possible if the storage is abstracted from the server; it is not possible with direct attached storage.
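The pooling and reassignment benefits above can be pictured with a toy model. This is illustrative only, not a real SAN management API:

```python
class StoragePool:
    """Toy model of SAN-style pooling: one shared pool of capacity,
    with volumes carved out and presented to servers on demand."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}  # volume name -> size in GB

    @property
    def free_gb(self):
        return self.capacity_gb - sum(self.volumes.values())

    def provision(self, name, size_gb):
        """Carve a new volume out of whatever capacity remains in the pool."""
        if size_gb > self.free_gb:
            raise ValueError("insufficient free capacity in pool")
        self.volumes[name] = size_gb

    def release(self, name):
        """Freed capacity returns to the pool for reassignment elsewhere."""
        del self.volumes[name]
```

The contrast with DAS is that here no volume is tied to the disks in any one server: releasing one server’s volume immediately makes that capacity available to any other server on the fabric.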

Vendors tend to differentiate the products in their product lines as entry-level, mid-range or high end. These are marketing terms and there is no clear boundary, but they reflect the technologies in use and of course the list price. So let’s examine a little how the vendors differentiate their product offerings.

An entry level storage system marketed as a NAS filer may use hot-swap SATA disks combined in a RAID array with a 1 or 10 gigabit Ethernet interface. Such a system may be able to present data via CIFS or NFS protocols as file shares over TCP/IP. It may also have SAN capabilities to allow volumes to be presented over iSCSI, and may have snapshot and cloning capabilities. There will likely be some battery-backed cache and a modular architecture allowing arrays to be connected via an interlink.

A mid-range system will have the above capabilities but will be marketed as a SAN solution with the primary interface being iSCSI or possibly FC (Fibre Channel). It may use near-line or mid-line hot-swap SAS disks and optional SSD (Solid State) disks for acceleration. It may offer compression and inline de-duplication, and will have dual controllers allowing for multipath connections to the storage. There may be optional modules for CDP (Continuous Data Protection) to allow volumes to be replicated, either synchronously or asynchronously, between remote arrays. A mid-range system will have fewer single points of failure than an entry level system, e.g. hot-swap power supplies, fans and controllers.

A high end system, on the other hand, will exhibit all the features of a mid-range system but will be designed to have zero single points of failure and many fault tolerant features, built-in error correction, and redundancy. The interface will likely be Fibre Channel, and the disk technology a mix of Fibre Channel disks and Enterprise SAS disks, with Solid State Disks used to accelerate performance. A high end system will likely have advanced management features and smart tiering options to automatically serve frequently accessed data from the highest performing disks and allow low value data to be served from less expensive disks.

When analyzing your requirements for enterprise storage solutions for your organization, look carefully at your technology options. If you need raw capacity but your data is transient, you may wish to consider a solution based on SATA disks; however, if high performance is a requirement, you may look at a solution based on enterprise SAS disks with SSD acceleration. Since SAS disks have considerably better performance and reliability than SATA disks, factor this into your decision making. Benchmark your applications to understand where performance bottlenecks may occur. You need to look at metrics like I/O operations per second (IOPS) and random versus sequential read/write rates. If you are looking at a high end solution, consider whether you need a Fibre Channel network, which will require expensive FC switches and Host Bus Adapters. Alternatively you may take a converged network approach and build your storage network on Ethernet/IP. If high availability is your goal, look at single points of failure in the solution and the MTBF (Mean Time Between Failures) values for the components. Look at the feature set and manageability options: how flexible are the snapshot/cloning facilities, for example? Does the system give you Continuous Data Protection options? When selecting a vendor, study their support terms, particularly their spare part supply chain. How quickly can they deliver a replacement disk when one fails, for example? There are many variables to consider when evaluating the suitability of a storage solution, and prices range significantly across product lines. Understanding your requirements is the key to selecting the right solution within your budget.
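Two of the metrics mentioned above can be estimated with simple arithmetic. A sketch, assuming the textbook formulas for disk service time and steady-state availability (MTTR here means mean time to repair, a figure you would take from the vendor's support terms):

```python
def iops_from_latency(seek_ms, rotational_ms, transfer_ms):
    """Rough random-IOPS estimate for a spinning disk:
    one I/O completes per (seek + rotational delay + transfer) interval."""
    service_time_ms = seek_ms + rotational_ms + transfer_ms
    return 1000.0 / service_time_ms

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)
```

These back-of-the-envelope figures are a useful sanity check on vendor claims, but real benchmarks of your own workload remain the deciding evidence.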

Some Storage Technology Acronyms:

• SATA – Serial ATA

• ATA – Advanced Technology Attachment

• SAS – Serial Attached SCSI

• SCSI – Small Computer System Interface

• iSCSI – SCSI over Internet Protocol

• FC – Fibre Channel

• NAS – Network Attached Storage

• SAN – Storage Area Network

• DAS – Direct Attached Storage

Hyper-V, a Virtualization Starting Point, Not VMware


By reducing dependence on many sets of hardware and the electricity consumed cooling them, Hyper-V saves cost and time and is a good choice to replace multiple, separate physical servers. Hyper-V enables you to consolidate multiple servers onto a single physical machine, efficiently running multiple different operating systems simultaneously and leveraging the power of x64 computing.

Windows Server 2008 R2 was released with a new Hyper-V feature called Live Migration. Live Migration uses Windows Server 2008 R2 clustering features to enable you to migrate running VMs from one physical computer to another without any interruption.

SSW can help you install, set up, manage and back up your Hyper-V solution. SSW runs its own Hyper-V infrastructure consisting of more than 10 physical servers and 50 virtual machines, so you can be sure that our consultants have hands-on experience when it comes to supporting your Hyper-V in a critical business environment.


• Are your servers in a data center? Reduce rack space expenditure by consolidating servers.
• Are your servers in your office? Reduce power consumption and cooling costs by consolidating servers.
• Your servers will be contained in single Virtual Hard Disk (VHD) files – take copies of all your critical servers home with you on a USB drive!
• Perfect for testing and development – create and test a wide variety of scenarios in a safe, self-contained environment.
• Have an important software update to perform but are concerned it may affect your current system? Take a snapshot before you upgrade so you can roll back the changes quickly and easily.
• Back up your VMs while they are switched on using DPM 2010.
• Need to perform maintenance on a physical server? No worries – live migrate all the VMs off the physical host and start maintenance work with no downtime.


Why Choose Hyper-V over VMware?

Hyper-V provides the following advantages over VMware:

• You already own it if you have Windows 2008 or Windows 2008 R2.
• Virtual Hard Disk (VHD) files can be easily copied to a USB drive or network location for peace of mind.
• Cheaper than VMware – for the same solution, VMware costs double compared to Hyper-V.
• Back up a system while it is live using Microsoft Data Protection Manager.
• A familiar interface – if you are comfortable using Windows Server 2008 you will be comfortable managing Hyper-V.

Technology Required For Hyper-V

A processor that supports Hardware Assisted Virtualization is needed. If you already have Windows Server 2008 R2 in your business, you are ready to take advantage of virtualization and start consolidating your infrastructure today! If you wish to take advantage of High Availability and Live Migration, a SAN and Windows Server 2008 R2 Enterprise or Windows Server 2008 R2 Datacenter is required.

However, people might get it wrong, thinking that they can’t have a virtualization infrastructure with a small investment, when actually they can build one from a single PC that supports VT (virtualization technology). You can buy a motherboard with sockets for multiple CPUs, RAM modules and HDDs. For example, with two quad-core i7 CPUs you get eight physical cores (sixteen logical cores with Hyper-Threading), for an aggregate of roughly 36GHz of CPU. Add 24GB of RAM and four 1TB hard disks in a RAID-5 array, and you can build a virtualization infrastructure. For this hardware, you may invest only around US$15,000 to have a complete virtualization infrastructure.
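The capacity and core arithmetic above can be checked quickly. A sketch, assuming standard RAID-5 parity overhead and the naive aggregate-GHz figure sometimes quoted for consolidated hosts:

```python
def raid5_usable_tb(disk_count, disk_tb):
    """RAID-5 stores one disk's worth of parity: usable = (n - 1) * size."""
    if disk_count < 3:
        raise ValueError("RAID-5 needs at least 3 disks")
    return (disk_count - 1) * disk_tb

def aggregate_ghz(logical_cores, ghz_per_core):
    """Naive 'total GHz' figure: cores times per-core clock speed."""
    return logical_cores * ghz_per_core
```

For the configuration above, four 1TB disks in RAID-5 leave 3TB usable, and sixteen logical cores at 2.25GHz each is where a "36GHz" headline figure comes from.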

Now that you’ve got a virtualization infrastructure, you can have all-in-one servers: web server, e-mail server, domain controller, FTP server, database server, financial system server, and H.R. system server, perhaps with a billing system as well.

Would it be affordable to run these on separate physical servers? Consider the alternative of a single server with 36GHz of CPU and 24GB of RAM. I’m sure you can’t find one, but if you could, it would cost you around US$100,000.

Therefore, you should start your own virtualization infrastructure with a very powerful system, Hyper-V. But remember that Hyper-V is best suited to the Windows OS, not VMware.

In the next issue, we will talk about when you should run VMware.


Global Telecommunications

The global communications industry is undergoing a massive transformation as service providers battle with falling subscriber revenues and re-focus on growth opportunities in emerging markets, technologies, and services.

In this new world of communications, success will go to those who identify, acquire and retain the most valuable customer segments; understand how emerging technologies and applications can be used to enable new revenue sources; and are aware of key issues involved in revenue generation. This new world order requires insight, analysis and authoritative direction on where to invest, which technologies to bank on, and which vendors to partner with.

Light Reading Communications Network advises leaders who build, deploy, finance, and regulate next-generation telecom networks on the most critical emerging market and service opportunities, enabling them to make informed business decisions.


The Light Reading Communications Network combines the most trusted telecom research brands with award-winning online communities and a rich events portfolio to deliver the only integrated business information platform serving the global communications industry.

Light Reading Communications Network offers a portfolio of media, research and performance-based marketing services to engage over half a million communications decision makers around the world. The Network portfolio includes research, consulting, content development, online and digital marketing, events, and lead generation programs that deliver results.

What makes us unique?

• We are by far the largest telecommunications news and analysis website in the world, with more than 300,000 monthly visitors and 2.4 million monthly page views.
• Our world-renowned research divisions, Heavy Reading and Pyramid Research, give Light Reading unparalleled, industry-leading content.
• Our editors provide up-to-the-minute breaking news, and do it with a sense of humor.

10 Tips for Total Online Security

With the sudden rise in Internet usage across the globe over the past few years, there has also been a rise in the amount of online scams and fraud. Today most Internet users are unaware of the most prevalent online threats, which poses a real challenge to their safe Internet usage. As a result, online security has become a serious concern for most Internet users. However, it is still possible to combat online insecurity effectively, provided that users are aware of the common scams and frauds and know how to protect themselves. A study shows that over 91% of Internet users are unaware of online scams and are worried about their security. If you are one among those 91%, here is a list of 10 tips to ensure your total online security.

  • Always install good antivirus software and keep it up-to-date. Also install a good anti-spyware program to keep your PC free of spyware.
  • Always visit known and trusted websites. If you are about to visit an unknown website, ensure that you do not click on suspicious links and banners.
  • Perform a virus scan on the files/email attachments that you download before executing them.
  • Regularly update your operating system and browser software. For better security, it is recommended that you surf the Internet with the latest version of your browser.
  • Never share your passwords (email, bank logins etc.) with anyone for any reason. Choose a strong password (a blend of alphanumeric and special characters) and change it regularly, e.g. every 3 months. Avoid easy-to-guess passwords (e.g. a pet’s or child’s name).
  • Always type the URL of the website into your browser’s address bar to reach login pages, rather than following links. For example, type the address of your Gmail login page yourself.
  • Before you enter your password on any login page, ensure that the address begins with https instead of http. The HTTPS protocol implements SSL (Secure Sockets Layer) and provides better security than normal HTTP. For more information on HTTPS and SSL, see Know More About Secure Sockets Layer (SSL).
  • Beware of phishing emails! Do not respond to any email that requests you to update your login details by clicking on a link in the body of the email. Such links can lead to fake login pages (spoofed pages). For more information on phishing, refer to What is Phishing?. Also refer to How to Protect an Email Account from being Hacked.
  • Always hit the logout button to close your login session rather than abruptly closing the browser window. Also clear your web browser cache after every session to remove the temporary files stored in the memory and on the hard disk of your PC.
  • Avoid using public computers or computers in Internet cafes to access any sensitive or confidential information, and avoid using them to log in to your email or bank accounts. You cannot be sure that spyware, keystroke loggers, password sniffers or other malicious programs have not been installed on such a PC.
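The password and HTTPS tips above can be enforced mechanically. A minimal sketch (the rules encoded here are one reasonable reading of the tips, not an authoritative policy):

```python
import re
from urllib.parse import urlparse

def is_strong_password(password, min_length=8):
    """A 'blend of alphanumeric and special characters': require a minimum
    length plus at least one upper, lower, digit, and special character."""
    return all([
        len(password) >= min_length,
        re.search(r"[A-Z]", password) is not None,
        re.search(r"[a-z]", password) is not None,
        re.search(r"[0-9]", password) is not None,
        re.search(r"[^A-Za-z0-9]", password) is not None,
    ])

def is_https_login_url(url):
    """Only enter credentials on pages served over HTTPS."""
    return urlparse(url).scheme == "https"
```

Many sites apply exactly this kind of check at sign-up; running it yourself when choosing a new password costs nothing.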