Leaseweb vs fdcservers vs unmeteredservers.com/choopa

Hi,
I need a few servers with 1Gbit dedicated unmetered bandwidth, and I need real 1Gbit bandwidth: no limits per IP or per connection or anything similar, good uptime, no downtime, quick reboots or remote reboot, and good support when I need it (I don't need managed servers). I want the best possible download speeds worldwide, especially in the USA and Europe. Leaseweb, fdcservers, unmeteredservers.com/choopa, or somebody else?

Thanks.

mariushm replied: Leaseweb will do 1gbps unmetered.
Check out net100tb.com; it's in the same datacenter as Leaseweb and may even be a Leaseweb reseller. 100tb.com in the US is very good: 100TB by default, and you can upgrade to 1Gbps unmetered.
I've heard only good things about choopa.com.

You might want to try voxel.net for some quality bandwidth, but it's probably expensive.

Hope you do realize, though, that "best possible download speed" often means expensive bandwidth, at around $6-8 per megabit or more.

In fact, if you want the best possible speeds worldwide, you'd be better off using a combination of several CDNs like Cachefly, Akamai, Voxel and others, which have prices starting from 10 cents per GB.

IC3 Networks replied: leaseweb takes my vote.
I tried them before and used to max out my speed from the EU, USA, CA and ME.
If you don't need managed servers, then Leaseweb is a good choice for you. I have also seen many people complaining about fdcservers; I have no experience with them, so you'd better use the search function here to find more info on them.

As for choopa, I heard they are really good, but no personal experience.

mercmarcus replied: I agree with mariushm. In the USA use 100tb.com, PERFECT!
In the EU use net100tb.com, VERY VERY GOOD!

They both can upgrade you to unmetered 1Gbit.
Right now I'm waiting on a custom solution from them: 12×1.5TB drives in a RAID 10 configuration with 64GB of RAM, which will serve as a CDN point for my customer.

Bernardoo replied: I recommend Leaseweb or dediserv.eu (they are also resellers of Leaseweb). net100tb.com also seems to resell Leaseweb, so getting servers directly from Leaseweb might be the best option.

Leaseweb > *

Why Debian is the best Linux Distribution

Debian (http://www.debian.org) is an excellent distribution of GNU/Linux. (A popular commercial alternative to Debian is Red Hat.) The releases of Debian are rock solid stable and come highly recommended. The Debian packaging system is well developed and acknowledged as an excellent piece of work. You can purchase the CD-ROM distributions of Debian inexpensively (see http://www.debian.org/distrib/vendors for a list of vendors) or burn your own CD-ROMs from images available on the net. This latter option is explored in this chapter.

Here are some specific advantages and benefits that distinguish Debian from other distributions:

  • Debian GNU/Linux makes it very simple to install new applications, configure old ones, and administer the system. The administrator does not have to worry about dependencies, library problems, or even overwriting previous versions of configuration files.
  • As a non-profit organisation Debian is more of a partner than a competitor with other distributions. Anyone can sign up as a Debian developer and be granted the same privileges as anyone else. There are currently over 870 active Debian developers. New work developed for Debian is available for all of the other Linux distributions to copy as soon as it’s uploaded to the Debian servers.
  • The Debian Free Software Guidelines are a critical component from a business standpoint. They specify the requirements for licenses of any package that is to be included with Debian. Debian conforms to the official GNU version of free software which means that every package included in Debian can be redistributed freely.
  • Debian is driven by policy. The formal and publicly available Debian policies have been developed over many years and are a mature response to dealing with the large task of maintaining such a distribution in a distributed manner. Various Debian tools (such as dpkg, apt-get, and lintian) effectively implement the policy and provide a guarantee of quality in the packaging.
  • Debian is an excellent choice for the development of software for all distributions of GNU/Linux. Because Debian’s processes, in terms of policies and packaging, are fair and visible and open standards conforming, Debian is a very clean and very carefully constructed distribution. Developments that occur on a Debian platform can thus easily be delivered or transferred to other GNU/Linux (and Unix) platforms.
  • It is difficult to upgrade a system from one Red Hat release to another. Debian provides simple migration paths that are well trodden. No more re-installing the operating system just to upgrade to a new release.
  • Debian’s tools have the ability to do recursive upgrades of systems.
  • Debian deals with dependencies: it identifies the required packages, installs them, and then installs the package you asked for.
  • Debian packages can Suggest other packages to be installed, and it is left to the user whether to follow the suggestions or not.
  • Multiple packages can Provide the same functionality (e.g., email, web server, editor). A package might thus specify that it depends on a web server, but not which particular web server (assuming it works with any web server).
  • Debian has a utility to install Red Hat packages if you are desperate!
  • Debian does not overwrite your config files nor does the packaging system touch /usr/local except perhaps to ensure appropriate directories exist for local (non-Debian) installed data and utilities.
  • Red Hat uses a binary database for its package data while Debian (dpkg) uses text files. Debian is more robust (if a single file gets corrupted it's less of a problem) and it is possible to fix or modify things by hand using a normal text editor if needed. (Debian's apt-get uses a mixed approach: it uses the same text files as dpkg but adds a binary cache to also get the advantages of a binary database.) A short sketch after this list shows how that text database can be read directly.
  • Red Hat packages rarely fix upstream file locations to be standards compliant but instead just place files wherever the upstream package happens to put them. Many upstream developers do not know about or conform to the standards. A minor example is that for a while the openssh rpms created /usr/libexec for the sftp daemons, but libexec is a BSD standard and the Linux standard says such things should go in /usr/lib/<program> or /usr/sbin.
  • Generally speaking, Debian packages must be created by “qualified” developers (and there are thousands of them) who are committed to following Debian’s strict policies requiring such things as FHS compliance and never overwriting config files without permission. Only packages from these developers become part of the Debian archives.
  • Debian runs on more hardware platforms than any other distribution.
  • The Debian packaging philosophy is to keep packages in small chunks so that the user can choose what to install with a little more control.
  • Fedora reportedly interferes with its distribution to make it a less free offering. Its libraries are modified to disallow the compilation of applications that conflict with commercial interests of the MPAA/RIAA.
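
As an illustration of the dpkg point above, here is a minimal Python sketch that reads dpkg's plain-text database (the standard location is /var/lib/dpkg/status on a Debian system) and lists the installed packages with their versions. It is only a sketch of the file format, not a replacement for dpkg itself.

```python
# List installed Debian packages by parsing dpkg's plain-text status database.
# Assumes the standard location /var/lib/dpkg/status on a Debian system.

def installed_packages(status_path="/var/lib/dpkg/status"):
    packages = []
    current = {}
    with open(status_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if line.strip() == "":
                # A blank line ends one package record.
                if current.get("Status", "").endswith("installed"):
                    packages.append((current.get("Package", ""), current.get("Version", "")))
                current = {}
            elif not line.startswith((" ", "\t")):
                # Header lines look like "Field: value"; continuation lines are skipped.
                key, _, value = line.partition(":")
                current[key] = value.strip()
    if current.get("Status", "").endswith("installed"):
        packages.append((current.get("Package", ""), current.get("Version", "")))
    return packages

if __name__ == "__main__":
    for name, version in sorted(installed_packages()):
        print(name, version)
```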

See also http://www.infodrom.org/Debian/doc/advantages.html.

Web Server comparison : Apache versus IIS

I ran across Apache at 56% – what is wrong? by /home/liquidat this weekend, and the resulting Digg thread, and enjoyed reading the age-old IIS vs. Apache debate waged by loyalists on both sides.  It is great to see the passion for Web servers still very much alive.  This is one of the reasons I love software…it is so much more than bits and bytes.  Software, good and bad, evokes an emotional response from users.  It frustrates the crap out of me when it doesn’t work like I want it to, and it makes me nod my head and say “cool…” when it does something really powerful that I don’t expect.

The IIS vs. Apache debate has been going on for a while, and reminds me of the Mac vs. Windows debate, which also never gets old.  I used to be a die hard Windows fan.  I got my hands on a Windows 95 beta and was so blown away by it.  I was one of those crazy kids that went to CompUSA at midnight the day it was released and bought my own copy.  Later in college I dual-booted into Linux so I could have access to gcc and all the great development tools we were using in class.  Now I run Mac OSX and Vista at home.

When I got out of college, I worked for a start-up ISP, and ended up focusing a lot of my energy on the Web hosting side of the business.  We started out with a Sun Ultra server, running Solaris, then deployed a bunch of Linux servers.  We used Zeus and Apache as a Web server.  They were both great.  I admire Apache for a lot of reasons.  It is a solid Web server with a great extensibility model, and is very reliable when run on Linux.

My history with IIS

I got my hands on IIS when it first came out in 1996.  At first it seemed like a toy (maybe because it was) but it quickly grew up.  With ASP in IIS 3.0 I fell in love.  After hacking so many CGI applications together in C or Perl, I was blown away at how productive I could be with ASP, especially when MDAC came out and made data access so easy.  If I had to make a bet, I'd guess this is one of the reasons people love IIS to this day: it is easy to set up and use, and incredibly powerful to program against.

I pushed the IIS4/NT 4 Option Pack very hard at the company I worked for in 1997, and we deployed the last beta in production.  It required a reboot every day in order to run properly, and depending on which series of patches we installed, it sometimes required more, but it was worth it.  I remember once installing an Oracle patch one morning, on the recommendation of an Oracle support engineer, that took out the entire server and required a full rebuild.  That was the day I learned to never install patches on a production server without first testing them. 🙂

IIS5 came out with Windows 2000, right as I joined Microsoft, and ended up being a disastrous release for the IIS team.  I remember sitting through meeting after meeting with customers who were hit by Code Red and Nimda, and who were justifiably infuriated by the impact the vulnerabilities had on their business.  IIS wasn't very popular inside the company at the time either, as these were the first broad-scale internet worm attacks against any Microsoft product, and it took time for others to realize: it can happen to you.

The IIS team learned some very hard lessons about security vs. features in 2001 and 2002.  We pored over our code, and we hired independent contractors to come pore over our code, fuzz it, hack it, and try to break it.  The result is quite possibly the most secure and reliable Web server ever with IIS6, released with Windows Server 2003.  Don't take my word for it: search http://secunia.com for IIS security issues yourself, and compare it to any other Web server product.

And with 2007 came IIS7 in Windows Vista, and later this year, with Windows Server "Longhorn".  IIS7 is more like a "v1" release than a "v7".  I can honestly say it is the biggest release of IIS ever.  It has more fundamental improvements and new capabilities than any previous release of IIS, and it hasn't lost sight of the basics: security, reliability, performance.  I think it will change the Web server market.  If you're already an IIS customer, there is a lot to look forward to with IIS7.  And if you haven't checked out IIS for a while, or you are still worried about security or reliability, it is time to give IIS a second look.

Bad reasons to avoid IIS

If you’re saying to yourself:  IIS isn’t as secure as Apache, or isn’t as reliable, or isn’t as fast, you should think twice.

Security.  If you're worried about IIS security vs. Apache, your concerns are outdated.  Check out http://secunia.com, compare IIS5 and IIS6's track record for the last 4-5 years, and compare it to Apache.  Having been on the IIS team during Code Red and Nimda, I can tell you it was a very painful experience and one I don't ever hope to re-live, nor do I wish it on my worst enemy.  The IIS team learned hard lessons in 2001, and the results speak for themselves.  Is IIS perfect?  Nope, it is still built by fallible humans, and we make mistakes just like every other engineering team.

Reliability and Performance.  IIS6 included a new process model which can reliably host Web applications and monitors them for health and responsiveness.  It can proactively recycle applications when they are unhealthy.  IIS7 takes this process model to the next level by automatically isolating each new site in its own Application Pool when it is created, and dynamically assigning a unique SID (identity) to the AppPool so it is isolated from all other sites on the box from a runtime identity perspective, without any additional management required.  It also isolates the configuration for the AppPool, so it is impossible to read configuration from other sites on the server.  This provides the ultimate Web server architecture for Windows: a high performance multi-threaded server that provides secure isolation of Web sites by default and is also agile enough to respond to poor health conditions and gracefully recycle applications.

If you're worried about IIS performance and reliability when running PHP vs. running it on Apache, your concerns are definitely valid.  Up until recently there were only two ways to run PHP: the slow way (CGI), and the unreliable way (ISAPI).  🙂  This is primarily a result of the lack of thread-safety in some PHP extensions: they were originally written for the pre-fork Linux/Apache environment, which is not multi-threaded.  Running them on IIS with the PHP ISAPI causes them to crash, taking out the IIS process serving your application.

Fortunately, the Microsoft/Zend partnership has addressed these issues, with many performance and compatibility fixes from Zend and a FastCGI feature for IIS which enables fast, reliable PHP hosting.  FastCGI is available now in Tech Preview form, and has also been included in Windows Server "Longhorn" Beta 3.  It will be included in Vista SP1 and Longhorn Server at RTM.

Reasons you should check out IIS7 if you use Apache today

There are so many new capabilities in IIS7 that listing them all would turn this already long post into a short novel.  If you want lots of specifics, go read through the IIS7 site.  Here are a few reasons you Apache users might be interested in looking at IIS7:

Text file configuration

Apache has httpd.conf, a simple text file for configuration, which makes it very easy to edit Apache configuration using text/code editors or to write Perl or other scripts to automate configuration changes.  Since the configuration file is just a text file, it is also easy to copy configuration from one server to another.  Unfortunately, Apache does require the administrator to manually signal Apache to reload configuration in order for changes to take effect.

Many IIS customers dread IIS' configuration store, the 'metabase', and for good reason.  It has been an opaque configuration store like the registry since it was introduced in IIS4, and while there are many tools and APIs you can use to configure IIS, nothing beats being able to open up your configuration in the text editor of your choice and directly change configuration settings.  With IIS7, all IIS configuration is now stored in a simple XML file called applicationHost.config, which is placed by default in the \windows\system32\inetsrv\config directory.  Changing configuration is as simple as opening the file, adding or changing a configuration setting, and saving the file.  Want to share configuration across a set of servers?  Simply copy the applicationHost.config file onto a file share and redirect IIS configuration to look there for its settings.  And whether your configuration is stored locally on the hard drive or on a file server, changes take effect immediately, without requiring any restarts.  All IIS configuration settings are self-described in a schema file that can be found under \windows\system32\inetsrv\config\schema.  Adding new configuration to IIS is as simple as dropping a new schema file in this directory and registering it; it then automatically becomes available through IIS' cmd-line tool and programmatic APIs.
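
Because the store is plain XML, ordinary tooling can work against it. As a rough illustration only (this is not an official IIS API; the file path and element names below are assumptions based on the description above, so back the file up and experiment on a non-production box), a few lines of Python with the standard library's XML parser could toggle a setting:

```python
# Toggle a setting in IIS7's XML configuration store using only the standard library.
# The path and element names are assumptions for illustration; back up the file first
# and run with administrative rights.
import xml.etree.ElementTree as ET

CONFIG = r"C:\Windows\System32\inetsrv\config\applicationHost.config"

def set_directory_browsing(enabled: bool, config_path: str = CONFIG) -> None:
    tree = ET.parse(config_path)
    root = tree.getroot()
    # We expect a global <system.webServer> section containing <directoryBrowse enabled="..."/>;
    # create the elements if they are missing.
    web_server = root.find("system.webServer")
    if web_server is None:
        web_server = ET.SubElement(root, "system.webServer")
    browse = web_server.find("directoryBrowse")
    if browse is None:
        browse = ET.SubElement(web_server, "directoryBrowse")
    browse.set("enabled", "true" if enabled else "false")
    tree.write(config_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    set_directory_browsing(False)
```

In practice you would more likely use the IIS tools and APIs described below, but the point stands: the configuration is an ordinary text artifact that any editor or script can handle.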

Distributed Configuration (by default)

Apache supports distributed configuration with a feature called .htaccess.  It is a powerful feature that enables configuration for a Web site to be overridden using a simple text file in the content directory.  Unfortunately, due to the way it is designed in Apache, using it incurs a huge performance hit.  In fact, the apache.org site recommends you avoid using it whenever possible.

IIS7 supports distributed configuration in web.config files, and it has some important advantages over .htaccess.  Web.config is the file that ASP.NET uses today to store configuration, so developers now have a single file, format and API to use to target Web site / app configuration.  Imagine storing your PHP, Apache and Web application settings in one file.  This distributed configuration support is very powerful, and it allows every per-URL IIS configuration property to be set in distributed configuration.  IIS7 caches web.config data, which avoids the per-request performance hit Apache suffers from.  The IIS implementation of distributed config is so good that we've made it the default for a bunch of IIS configuration that we know developers typically want to set along with their Web sites.  For example, if you use any IIS7 tool to override the default document for a site or application, that setting will be stored in the web.config file for that directory by default.  Of course, you can override the default and store everything in IIS' global configuration file if you want, and you can decide on a section-by-section basis which settings you want distributed and which you want to keep centralized.  There is much more granularity in IIS' configuration locking support than in Apache, enabling you to lock even at the attribute level if desired.

Extensibility (C/C++/C#/VB.NET/and 30+ other languages…)

As I noted above, Apache has had a very modular architecture with powerful extensibility for many years.  Apache's architecture has allowed many people to take it and add to, modify, or extend the Web server to do many custom things.  The resulting community of modules for Apache has been impressive to watch.  IIS' ISAPI extensibility hasn't been a complete slouch: some of the world's biggest application frameworks have successfully run on ISAPI, including ASP, ASP.NET, ColdFusion, ActiveState Perl, etc.  Unfortunately, the number of successful ISAPI developers does seem to be smaller than the number of successful Apache mod developers, and the product team itself rarely elected to use ISAPI to build actual IIS features.

This all changes with IIS7.  With IIS7, IIS introduces a new native extensibility interface, CHttpModule, on top of which we ported all of the IIS features as discrete, pluggable binaries.  The IIS core Web server itself is a very thin event pipeline, and each of the IIS features can now be added and removed independently.  The extensibility point, CHttpModule, is much more powerful than ISAPI, and provides fully asynchronous superset support for extensions and filters.  Don't like how IIS does a particular feature?  Rip it out and replace it with your own: you have all the APIs the IIS team has.

Even more impressive, IIS7 introduces managed extensibility of the core Web server via the existing System.Web IHttpModule and IHttpHandler interfaces, enabling any .NET framework developer to extend IIS at the core and build a new, custom or replacement feature.  I showed this off in a recent blog post on how to build a SQL Logging module that can add to or replace the built-in W3C logging using .NET in less than 50 lines of code.

Advanced Diagnostics and Troubleshooting support

Whether you’re running IIS or Apache, troubleshooting problems can be a real bear.  Applications running in a high-performance, multi-threaded, console environment are very tough to debug, especially when in production use.  IIS7 innovates in several key ways to make the support for these situations far better than what you see with any other Web server.

First, IIS supports a feature called 'failed request tracing', which is really very cool.  Simply give IIS a set of error conditions to watch for, based on response code or timeout value, and IIS will trap that condition and log a detailed trace of everything that happened during the request lifetime leading up to the error.  Seeing requests time out on a periodic basis, but not sure why?  Simply tell IIS to look out for requests that take longer than n seconds to complete, and IIS will show you every step in the request lifetime, including the duration of each step, and you'll see the last event to have fired before the timeout occurred.  Are you seeing the dreaded "500 Internal Server Error"?  Tell IIS to trap this error and then browse through each step of the request to see where things went south.  I know of nothing like this with Apache.

IIS also supports real-time request monitoring and runtime data.  Want to know which requests are in flight on the server, how long they have been running, which modules they are in, and so on?  IIS can tell you from the cmd-line, from the administration tool, or even programmatically via the .NET and WMI APIs.  It is now very easy to look inside IIS and see what's going on inside your server.

Rich Administration APIs and Tools

This is an area where IIS has traditionally shined, and IIS7 takes the lead even further.  IIS7’s new administration tool is very simple and easy to use, but extremely powerful.  It is now feature-focused: simply click on a Web server, site or application and see every feature available to manage.  On the right hand pane there is a set of simple administration tasks for each scope that makes it easy to create new sites and applications, modify logging settings, or see advanced settings.  The administration tool remotes over HTTP, making it possible to manage the server locally or over the internet.  And the tool fully supports the distributed configuration model, making it possible to add ‘delegated’ administrators for Web sites and applications and allowing them to use Web.config or the same Administration tool to configure their Web site.  The administration tool is also completely modular, and built on top of a new extensibility framework, making it easy to add new features into the tool.

In addition to a rich administration tool, IIS also ships AppCmd.exe, a Swiss Army knife for cmd-line administration.  With it, you can set any IIS setting, view real-time request and runtime information, and much more.

IIS7 also includes several programmatic interfaces which can be used to manage the server.  Sure, you can use Perl to hack away at the new text-based config file if you want, or you can use rich, object-oriented APIs in any .NET or script language if you prefer.  Microsoft.Web.Administration is a powerful new .NET API for programmatically managing the server.  IIS7 also includes a new WMI provider for scripting management using VBScript or JScript.

Summary

IIS7 is a major overhaul of the Web server.  It builds on the rock-solid security and reliability of IIS6, and promises some very powerful new extensibility and management capabilities that meet and exceed what Apache can do today.  It’s already in Vista, so you can use it on the desktop today, and with Beta 3 it is available for free for production use through the GoLive program.

I'm quite certain this won't end the debate of which is the better Web server, but I thought I'd add my two cents. 😉

Source : http://blogs.iis.net/bills/archive/2007/05/07/iis-vs-apache.aspx

Elance Vs ODesk Review – A Freelancer’s Perspective

As a freelance writer, there is one activity that takes up as much of my time as writing does and that’s looking for work. Once you find work, your next concern is whether or not you’ll get paid. And if I do get paid, how long will it take?

As a freelancer, there are numerous sites to choose from on which you can bid on projects. Two popular sites today are Elance and oDesk. From the homepage, you might think that these two sites are pretty similar. After all, they both state that there’s guaranteed work with guaranteed payment. A freelancer’s dream come true, right?

Let’s compare the two:

Elance Guarantees both Hourly and Fixed Price Work

All fixed price projects on Elance use escrow. Escrow is pretty straightforward and it’s safe for both the buyer and the provider. The buyer funds the account and they release it when the project is completed. If for some reason they forget to release the funds, they are automatically released 30 days later. It might take some time, but you’re guaranteed payment.

Elance is also able to guarantee their hourly projects as well. This is done by the provider using Tracker with Work View. Essentially Work View takes screenshots as you work on a project and hours billed must correspond. Your hours are also automatically paid when timesheets are sent, unless a client identifies certain hours as not being related to the project.

I don't typically work on an hourly basis, but it's still good to know that your hourly work is guaranteed. And since 99% of my projects use escrow, I like having that security.

oDesk Only Guarantees Hourly Work

Although oDesk does guarantee that you will receive payment for hourly work (and it is tracked in a similar fashion to Elance), they don’t have escrow. This makes oDesk great for providers who do work on an hourly basis, but if you work on a project by project basis, there’s no escrow system to guarantee your payment.

Communicating through Elance

Elance offers a variety of tools to assist you in communicating with your clients as well as ensuring that the project flows smoothly. Once you are awarded a project, you have access to a Private Message Board with Real Time Chat, File Sharing with Version Control, Project Terms with Milestones and Comments, Status Reports and Timesheets, Autopay on hourly projects and Escrow for fixed price projects.

All of these tools allow you to document the project and all details associated with it. You can discuss the project prior to the award and after on the private message board; business terms are then set up with the necessary milestones and escrow is funded. Throughout the entire process, everything is documented so you can refer back to any messages to ensure you’re both on the same page.

Communicating through oDesk

oDesk offers the Work Diary, which tracks the amount of time you spend working on a project, but that's about it. You don't have the many tools that Elance offers to help you work. In fact, there's not even a private message board. Communication must occur through personal email, telephone or chat, whichever the provider and buyer agree upon. In some ways this is easier; however, emails can get lost or go to spam folders, chats get turned off, and phone calls can be missed. And there's no communal workspace where all the communications are tracked. There's ample opportunity for miscommunication here.

Quality of Projects

As a provider, the last thing you want to do is spend hours communicating with a potential buyer who may not even have a whole lot of potential. Emails, phone calls, and chats all take time that you could be spending on your current projects or looking for serious projects. One way that Elance ensures quality projects is by testing the commitment level of buyers: it charges a $10 activation fee to verify that the buyer has a legitimate form of payment to pay providers for the work they perform. This must be completed before the buyer can post any projects.

On oDesk, buyers can post as many projects as they want and communicate with all of the providers that they want and never even award a single project. They advertise that it’s “free to post jobs and interview contractors,” but that’s not necessarily a good thing. There’s nothing worse than talking with a lot of buyers who are just testing the waters and never result in a paying project.

Granted, there is nothing that requires a buyer to award a project on Elance, but it seems to attract a higher quality buyer and higher quality projects.

Another indication of the quality of jobs is the budgets allowed by both websites. Elance has a minimum bid of $50, while oDesk actually has an estimated budget level as low as $5! Unfortunately, there are few projects that most providers can do for only $5, and quality providers charge more than $5 per hour.

Conclusion

I am certain that there are numerous providers on oDesk that are doing well for themselves, and that's great. However, from my perspective (and I've been doing this for several years), oDesk just doesn't provide the level of security that I need as a provider. If my business is to be successful, then I have to know that I'm protected by the site that I choose to work through and pay a membership to. Elance offers me that security. Sure, there are times when projects go awry, I don't come out on the better end of the deal, and I may lose money, but at least I know that the decision we came to is a fair one for both provider and buyer.

Escrow, and the support system that surrounds it, is the main thing that matters to me. Anyone venturing out into the freelance marketplace, whether as a designer, writer, administrative assistant or other consultant, should think long and hard about the payment security system of any website they choose to work through. Sure, you can always charge half up front and half upon completion, but there's no guarantee that you're going to get that second half. Escrow ensures you get all that you're due.

Words You Want is your one stop resource for SEO ghostwriting and eBook writing. Words You Want offers a variety of SEO writing services, pre-written ebooks in the eBooks To-Go store, linkbuilding, social media packages, SEO packages and more. Visit WordsYouWant.com and watch our animated videos to learn more about how Words You Want can help you with your online marketing campaigns and SEO.

Article Source: http://EzineArticles.com/?expert=Valerie_Mellema

Elance vs oDESK — another perspective

Outsourcing – Odesk Experiences vs. Elance

After having a few bad freelance experiences (details here) with Elance.com, I decided to look elsewhere for outsourcing certain web development tasks and research.

I have been using oDesk for a few weeks now and I can honestly say I like the service, the features, and the providers. The first thing I did when I created my account was to sign up as a provider. I wanted to see what types of hoops a provider had to jump through in order to be listed. The very first thing you must do is take a basic functionality and usability exam to ensure that you understand how oDesk works. This was a painless, if slightly annoying, process, but necessary to ensure basic understanding. Additionally, oDesk had other exams listed and recommended that I take a few in order to bolster confidence for my potential employers.

Exams

oDesk also offers internal testing and self-evaluation for certain skills. Completing one of these exams adds ratings to your profile, and I can tell you from experience that they aren't a cakewalk. Additionally, these are timed exams, so you can't fake it by trying to Google the answers; you will likely run out of time. I consider myself knowledgeable in the network and systems security arena, and I must say that the network security exam wasn't easy. I expected it to have good questions but not tough ones, and I would consider them a solid prescreening qualifier. However, there are those rare individuals who can pass any exam but have no practical experience, and this is where the interview process and work portfolio come into play.

Screening Candidates

Avoid the temptation to hire the candidate with the absolute lowest price. In my experience you will not be happy with the deliverable, or with an equally important factor: communication. Candidates with the lowest price may be trying to get a foot in the door and establish a reputation, or they might just be cheap because they lack sufficient skills and you are paying them to learn. You can get good deals and find quality candidates at really low rates, but it will be a gamble. Instead of looking at price alone, make sure you look at the cover letter, the number of oDesk hours and the feedback score. Regarding the feedback score, be sure to look at the total number of feedback entries vs. the score average; one or two entries may not be enough to provide the necessary assurance.

Payment

As a buyer, you will need to set up a credit card for automatic payment to your providers. Payments for hourly services occur on the following schedule:

  • Monday: The work week begins at 12 a.m. GMT.
  • Sunday: The work week ends at 11:59 p.m. GMT. The provider receives his/her timelog for review and is responsible for making sure it is accurate: any offline time should be added, and all non-work time should be removed.
  • Monday: The deadline for the Work Diary is Monday 12 p.m. (noon) GMT. At that time, the final timelog is sent to the buyer for review and the dispute period begins. The buyer sees $X in Pending Debit and the provider sees $Y in Pending Credit.
  • Wednesday: The review period ends Wednesday evening, PST.
  • Thursday: The buyer's invoice is now due. The buyer will see a negative balance in the "Your Balance" box at the top left of the console, and the buyer's credit card is charged Thursday evening.
  • Next Wednesday: The provider's earnings become available, and the provider will see a positive balance in the "Your Balance" box at the top left of the Provider Console. After the security period has passed, the provider can withdraw the balance.

Summary

This post barely scratches the surface of oDesk, its benefits, and its many features. So far I have enjoyed using the oDesk service and its detailed reporting features. I intend to continue documenting my experiences with oDesk as I learn more. Additionally, I will review other services as I become aware of them. If there is any service you would recommend, or any experience (positive or negative) with outsourcing services that you would like to share, please feel free to do so.


My WordPress Custom Form Plug-in

I developed a Custom Form plug-in that works with every version of WordPress. Here on this WordPress blog I cannot install it because of some restrictions, but it works on my personal web site.

This plug-in is sold and implemented by me. It is a plug-in with which you can insert, edit, delete, sort, and select any type of data.

Below are its snapshots:

This is the Admin Console of the plug-in. It can be restricted to users according to WordPress user security levels.

Conversion Optimization of your WebSite

In internet marketing, conversion optimization, or conversion rate optimization is the science and art of creating an experience for a website visitor with the goal of converting the visitor into a customer. It is also commonly referred to as CRO.

Web origins

Conversion optimization was born out of the need of lead generation and ecommerce internet marketers to improve their website’s results. As competition grew on the web during the early 2000s, Internet marketers had to become more measurable with their marketing tactics. They began experimenting with website design and content variations to determine which layouts, copy text, offers and images will improve their conversion rate. Many practitioners have contributed to the field, including Bryan and Jeffrey Eisenberg, Avinash Kaushik, Anne Holland, Tim Ash, Ayat Shukairy, Jonathan Mendez, Khalid Saleh, Chris Goward, Keith Hagen, Jon Correll and Zack Linford.

Why conversion optimization

Frequently, when marketers target a pocket of customers that has shown spectacular lift in an ad campaign, they belatedly discover the behavior is not consistent. Online marketing response rates fluctuate widely from hour to hour, segment to segment and offer to offer.

This phenomenon can be traced to the difficulty humans have separating chance events from real effects. Using the haystack process, at any given time marketers are limited to examining and drawing conclusions from small samples of data. However, psychologists (led by Kahneman and Tversky) have extensively documented the human tendency to find spurious patterns in small samples, which explains why poor decisions get made. Statistical methodologies can therefore be leveraged to study large samples and mitigate the urge to see patterns where none exist.
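
As a minimal sketch of what such a statistical methodology looks like in practice, the snippet below runs a two-proportion z-test on the conversion rates of two page variants, so that small-sample noise is less likely to be mistaken for a real lift. The function name and the sample numbers are purely illustrative.

```python
# Two-proportion z-test: is variant B's conversion rate genuinely different from A's,
# or could the observed gap plausibly be chance? Illustrative sketch only.
from math import sqrt, erf

def conversion_lift_p_value(conv_a, visitors_a, conv_b, visitors_b):
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

if __name__ == "__main__":
    p_a, p_b, z, p = conversion_lift_p_value(48, 1000, 63, 1000)
    print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

A large p-value here is the quantitative version of "this could just be noise", which is exactly the trap the paragraph above describes.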

These methodologies, or "conversion optimization" methods, are then taken a step further to run in a real-time environment. The real-time data collection, and the messaging driven by it, increase the scale and effectiveness of the online campaign.

How conversion optimization works

Conversion rate optimization is the process of increasing a website's leads and sales without spending money on attracting more visitors, for example by reducing the visitor "bounce rate". Some test methods let you monitor which headlines, images and content help you convert more visitors into customers.

There are several approaches to conversion optimization, with two main schools of thought prevailing in the last few years. One school is more focused on testing as an approach to discover the best way to increase the conversion rate of a website, a campaign or a landing page. The other school is focused more on the pretesting stage of the optimization process. In this second approach, the optimization company invests a considerable amount of time understanding the audience and then creating a targeted message that appeals to that particular audience; only then is it willing to deploy testing mechanisms to increase conversion rates. The article "a case against multi-variant testing" outlines some of the reasons testing should not be the only component of conversion optimization work.

Elements of the test focused approach to conversion optimization

Conversion optimization platforms for content, campaigns and delivery then need to consist of the following elements:

Data collection and processing

The platform must process hundreds of variables and automatically discover which subsets have the greatest predictive power, including any multivariate relationships. A combination of pre- and post-screening methods is employed, dropping irrelevant or redundant data as appropriate. A flexible data warehouse environment accepts customer data as well as data aggregated by third parties. Data can be numeric or text-based, nominal or ordinal; bad or missing values are handled gracefully. Data may be geographic, contextual, frequency-based, demographic, behavioral, customer-level, and so on.
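
A rough sketch of the pre-screening step described above, assuming the visitor attributes arrive as a pandas DataFrame (the column-selection thresholds and the choice of filling missing numeric values with medians are illustrative assumptions, not a prescribed method):

```python
# Illustrative pre-screening of a visitor-attribute table: drop columns that are
# mostly missing or carry no variation, then fill the remaining gaps.
import pandas as pd

def prescreen(df: pd.DataFrame, max_missing: float = 0.4, min_unique: int = 2) -> pd.DataFrame:
    df = df.copy()
    # Drop columns that are largely empty or effectively constant.
    keep = [col for col in df.columns
            if df[col].isna().mean() <= max_missing
            and df[col].nunique(dropna=True) >= min_unique]
    df = df[keep]
    # Handle remaining bad or missing values gracefully.
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            df[col] = df[col].fillna(df[col].median())
        else:
            df[col] = df[col].fillna("unknown")
    return df
```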

Optimization goals

The official definition of "optimization" is the discipline of applying advanced analytical methods to make better decisions. Under this framework, business goals are explicitly defined and then decisions are calibrated to optimize those goals. The methodologies have a long record of success in a wide variety of industries, such as airline scheduling, supply chain management, financial planning, military logistics and telecommunications routing. Goals can include maximization of conversions, revenues, profits, LTV, or any combination thereof.

Business rules

Arbitrary business rules must be handled under one optimization framework. Some typical examples include:

  • Minimum (or maximum) weights for specific offers
  • “Share of voice” among all offers
  • Differential eligibility for different offers
  • Mutually exclusive offers
  • Bundled offers
  • Specified holdout sample

Such a platform should understand these and other business rules, then adapt its targeting rules accordingly.

Real-time decision making

Once mathematical models have been built, ad/content servers use an audience screening method to place visitors into segments and select the best offers, in real time. Business goals are optimized while business rules are enforced simultaneously. Mathematical models can be refreshed at any time to reflect changes in business goals or rules.

Statistical learning

Results are made repeatable by employing a wide array of statistical methodologies. Variable selection, validation testing, simulation, control groups and other techniques together help to distinguish true effects from chance events. A champion/challenger framework ensures that the best mathematical models are always deployed. In addition, performance is enhanced by the ability to analyze huge datasets and to retain historical learning.


Source : Wikipedia  ( http://en.wikipedia.org/wiki/Conversion_optimization )

Search Engine Optimization (SEO)

Search engine optimization (SEO) is the process of improving the visibility of a web site or a web page in search engines via the "natural" or un-paid ("organic" or "algorithmic") search results. Other forms of search engine marketing (SEM) target paid listings. In general, the earlier (or higher on the page), and more frequently a site appears in the search results list, the more visitors it will receive from the search engine. SEO may target different kinds of search, including image search, local search, video search and industry-specific vertical search engines. This gives a web site web presence.

As an Internet marketing strategy, SEO considers how search engines work and what people search for. Optimizing a website may involve editing its content and HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.

The acronym “SEO” can refer to “search engine optimizers,” a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term “search engine friendly” may be used to describe web site designs, menus, content management systems, images, videos, shopping carts, and other elements that have been optimized for the purpose of search engine exposure.

Another class of techniques, known as black hat SEO or spamdexing, uses methods such as link farms, keyword stuffing and article spinning that degrade both the relevance of search results and the user-experience of search engines. Search engines look for sites that employ these techniques in order to remove them from their indices.

History

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was submit the address of a page, or URL, to the various engines which would send a “spider” to “crawl” that page, extract links to other pages from it, and return information found on the page to be indexed.[1] The process involves a search engine spider downloading a page and storing it on the search engine’s own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase “search engine optimization” probably came into use in 1997.[2]

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page’s content. Using meta data to index pages was found to be less than reliable, however, because the webmaster’s choice of keywords in the meta tag could potentially be an inaccurate representation of the site’s actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[3] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[4]

By relying so heavily on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search, allowing those results to be false would drive users to other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.

Graduate students at Stanford University, Larry Page and Sergey Brin, developed “backrub,” a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[5] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
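
The random-surfer idea can be written down in a few lines. The sketch below is a toy power-iteration implementation over a hand-made link graph, with the conventional damping factor of 0.85; it illustrates the concept only and has nothing to do with Google's production system.

```python
# Toy PageRank via power iteration over a small link graph (illustration only).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A dangling page spreads its rank evenly over all pages.
                for target in pages:
                    new_rank[target] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

if __name__ == "__main__":
    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))
```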

Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[6] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[7]

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. Google says it ranks sites using more than 200 different signals.[8] The leading search engines, Google and Yahoo, do not disclose the algorithms they use to rank pages. Notable SEO service providers, such as Rand Fishkin, Barry Schwartz, Aaron Wall and Jill Whalen, have studied different approaches to search engine optimization, and have published their opinions in online forums and blogs.[9][10] SEO practitioners may also study patents held by various search engines to gain insight into the algorithms.[11]

In 2005 Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users.[12] In 2008, Bruce Clay said that “ranking is dead” because of personalized search. It would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.[13]

In 2007 Google announced a campaign against paid links that transfer PageRank.[14] On June 15, 2009, Google disclosed that it had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[15] As a result of this change, the usage of nofollow leads to evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the usage of iframes, Flash and JavaScript.[16]

In December 2009 Google announced it would be using the web search history of all its users in order to populate search results.[17]

Real-time-search was introduced in late 2009 in an attempt to make search results more timely and relevant. Historically site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[18] This new approach to search places importance on current, fresh and unique content.

Relationship with search engines

By 1997 search engines recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.[19]

Due to the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and SEO service providers. In 2005, an annual conference, AIRWeb, Adversarial Information Retrieval on the Web,[20] was created to discuss and minimize the damaging effects of aggressive web content providers.

SEO companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[21] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[22] Google’s Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[23]

Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. In fact, with the advent of paid inclusion, some search engines now have a vested interest in the health of the optimization community. Major search engines provide information and guidelines to help with site optimization.[24][25][26] Google has a Sitemaps program[27] to help webmasters learn if Google is having any problems indexing their website; it also provides data on Google traffic to the website. Google's guidelines are a list of suggested practices Google has provided as guidance to webmasters. Yahoo! Site Explorer provides a way for webmasters to submit URLs, determine how many pages are in the Yahoo! index and view link information.[28]

Methods

Getting indexed

The leading search engines, such as Google, Bing, and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or cost per click.[29] Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results.[30] Two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review.[31] Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that aren't discoverable by automatically following links.[32]
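
An XML Sitemap itself is just a small file following the sitemaps.org protocol. A minimal Python sketch that writes one (the URLs below are placeholders):

```python
# Generate a minimal XML Sitemap (sitemaps.org protocol) for a list of URLs.
import xml.etree.ElementTree as ET

def write_sitemap(urls, path="sitemap.xml"):
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    write_sitemap(["http://www.example.com/", "http://www.example.com/about"])
```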

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[33]

Preventing crawling

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine’s database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[34]
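
Python's standard library can read a robots.txt the same way a well-behaved crawler does, which is handy for checking what you have actually excluded. The URLs below are placeholders:

```python
# Check whether a crawler is allowed to fetch given URLs, per the site's robots.txt.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("http://www.example.com/robots.txt")
parser.read()  # fetches and parses the robots.txt file

print(parser.can_fetch("*", "http://www.example.com/cart"))        # e.g. False if disallowed
print(parser.can_fetch("*", "http://www.example.com/index.html"))  # e.g. True
```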

Increasing prominence

A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to the most important pages may improve its visibility.[35] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[35] Adding relevant keywords to a web page's meta data, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the "canonical" meta tag[36] or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.
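
URL normalization itself is mechanical; the sketch below shows the kind of canonicalization typically involved (lower-casing the scheme and host, dropping default ports, stripping fragments and trailing slashes). Which rules are safe to apply is a per-site decision, so treat this as an illustration rather than a recipe:

```python
# Illustrative URL normalization: lower-case scheme and host, drop default ports,
# strip fragments and trailing slashes. Which rules apply is site-specific.
from urllib.parse import urlsplit, urlunsplit

def normalize(url: str) -> str:
    parts = urlsplit(url)
    host = parts.hostname or ""          # urlsplit lower-cases the host for us
    port = parts.port
    if port and not ((parts.scheme.lower() == "http" and port == 80) or
                     (parts.scheme.lower() == "https" and port == 443)):
        host = f"{host}:{port}"
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), host, path, parts.query, ""))

if __name__ == "__main__":
    print(normalize("HTTP://www.Example.com:80/About/"))  # -> http://www.example.com/About
```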

White hat versus black hat


SEO techniques are classified by some into two broad categories: techniques that search engines recommend as part of good design, and those techniques that search engines do not approve of and attempt to minimize the effect of, referred to as spamdexing. Some industry commentators classify these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.[37] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites will eventually be banned once the search engines discover what they are doing.[38]

An SEO tactic, technique or method is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[24][25][26][39] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see.

White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to game the algorithm. White hat SEO is in many ways similar to web development that promotes accessibility,[40] although the two are not identical.

White hat SEO is, in essence, effective marketing: making efforts to deliver quality content to an audience that has requested it. Traditional marketing has long allowed this through transparency and exposure. A search engine's algorithm, such as Google's PageRank, takes this into account.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines’ algorithms, or by a manual site review. One infamous example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[41] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google’s list.[42]

As a marketing strategy

SEO is not necessarily an appropriate strategy for every website, and other Internet marketing strategies can be much more effective, depending on the site operator’s goals.[43] A successful Internet marketing campaign may drive organic traffic, achieved through optimization techniques and not paid advertising, to web pages, but it also may involve the use of paid advertising on search engines and other pages, building high quality web pages to engage and persuade, addressing technical issues that may keep search engines from crawling and indexing those sites, setting up analytics programs to enable site owners to measure their successes, and improving a site’s conversion rate.[44]

SEO may generate a return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. (Some trading sites, such as eBay, are a partial exception: eBay announces how and when its ranking algorithm will change a few months before the change takes effect.) Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[45] It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.[46] The top-ranked SEO blog Seomoz.org[47] has suggested, “Search marketers, in a twist of irony, receive a very small share of their traffic from search engines.” Instead, their main sources of traffic are links from other websites.[48]

International markets

Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines’ market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[49] In markets outside the United States, Google’s share is often larger, and Google remains the dominant search engine worldwide as of 2007.[50] As of 2006, Google had an 85-90% market share in Germany.[51] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[51] As of June 2008, Google’s market share in the UK was close to 90%, according to Hitwise.[52] That market share is achieved in a number of countries.[53]

As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable markets where this is the case are China, Japan, South Korea, Russia and the Czech Republic where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.

Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[51]

Legal precedents

On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing’s claim was that Google’s tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google’s motion to dismiss the complaint because SearchKing “failed to state a claim upon which relief may be granted.”[54][55]

In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. KinderStart’s web site was removed from Google’s index prior to the lawsuit, and the amount of traffic to the site dropped by 70%. On March 16, 2007, the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart’s complaint without leave to amend, and partially granted Google’s motion for Rule 11 sanctions against KinderStart’s attorney, requiring him to pay part of Google’s legal expenses.[56][57]

Source: Wikipedia (http://en.wikipedia.org/wiki/Search_engine_optimization)

Google Adwords API

What is the Google AdWords API?

The Google AdWords API lets developers build applications that interact directly with the AdWords platform. With these applications, advertisers and third parties can more efficiently and creatively manage their large or complex AdWords accounts and campaigns.

Flexible and functional. Use the AdWords API to build the application that meets your needs. Here are some possibilities:

  • Automatically generate keywords, ad text, and destination URLs.
  • Integrate AdWords data with your inventory system to manage campaigns based on stock.
  • Develop additional tools and applications to help you manage accounts.

Develop in the language of your choice. The AdWords API’s SOAP interface can be used from all popular programming languages and platforms, including Java, PHP, Python, .NET, Perl, and Ruby.
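
As a rough illustration of what a SOAP call looks like from Python, the sketch below uses the third-party zeep library; the WSDL URL, operation name, and parameters are hypothetical placeholders rather than the actual AdWords service definitions, which additionally require OAuth credentials and a developer token that Google’s official client libraries supply for you.

  # A minimal SOAP sketch using the zeep library; the WSDL URL and the
  # "get" operation below are hypothetical placeholders, not the real
  # AdWords services, which additionally require authentication headers.
  from zeep import Client

  client = Client("https://example.com/api/CampaignService?wsdl")  # hypothetical WSDL

  # Print the services and operations described by the WSDL.
  client.wsdl.dump()

  # Hypothetical operation call: request a few campaign fields.
  campaigns = client.service.get({"fields": ["Id", "Name", "Status"]})
  print(campaigns)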

Signing up is easy. All you need to get started is an active AdWords account. Once you have registered as a developer, you can access your AdWords API Center to manage your token and budget settings.

oDesk Developer Contractor

oDesk Developer Wiki

The oDesk API Center is where users can access web services APIs to build their own oDesk applications. Creative companies and contractors can build apps, incorporate oDesk into their dashboards, and integrate oDesk features with their websites or management systems.

What can companies do?

  • Build their own team room apps
  • Incorporate oDesk into their desktop dashboards
  • Integrate oDesk with their website or management systems
  • Create custom provider search views and display these views off-site

What can contractors do?

  • Create their own work diary apps to better support their buyers
  • Integrate their oDesk profile into their own website or in other places where they’d like to promote their services

Technical details (please check the API Center documentation for full details):

  • Any oDesk user can access the oDesk web services APIs
  • To fully utilize the API, developers need to create API keys
  • The API is a REST API; it is easy to use from pretty much any development language (see the request sketch below)
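
For instance, here is a minimal sketch of a key-authenticated REST call from Python using the requests library; the endpoint path and parameter names are hypothetical placeholders, so consult the API Center documentation for the real resources and the authentication each one requires.

  # A minimal REST sketch using the requests library; the endpoint and the
  # parameter names are hypothetical placeholders, not real oDesk resources.
  import requests

  API_KEY = "your-api-key"  # issued through the oDesk API Center

  response = requests.get(
      "https://www.odesk.com/api/example/v1/resource.json",  # hypothetical endpoint
      params={"api_key": API_KEY, "q": "python developer"},
  )
  response.raise_for_status()
  print(response.json())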

Support

We will continuously release more APIs, update existing ones, and feature applications developed using the oDesk APIs.

If there is something that you would like to see and we do not yet offer, let us know in the feedback forums or drop us a line at apisupport@odesk.com.

There is also a community IRC channel, #odesk, running on freenode.net, so drop by and say hi :)

FAQ

Frequently asked questions regarding the API

My API Keys

Request new keys and view your existing keys here.

View the API documentation

oDesk has an open API that allows developers to query information from oDesk. Check out the full documentation and get started.

Recent updates

Check out the recent updates and leave us feedback in the API Center community.

Examples

See some examples of what others have done with the API on our examples page.

Featured Apps

Interested in what others are working on using our API? Would you like to add a new one? Check out what’s been done so far here.