Leaseweb vs fdcservers vs unmeteredservers.com/choopa

Hi,
I need a few servers with 1Gbit dedicated unmetered bandwidth, and I need real 1Gbit bandwidth with no per-IP connection limits or anything similar, good uptime, no downtime, quick or remote reboot, and good support when I need it (I don't need managed servers). I'm after the best possible download speed worldwide, especially in the USA and Europe. Leaseweb, fdcservers, unmeteredservers.com/Choopa, or somebody else?

Thanks.

mariushm replied: Leaseweb will do 1gbps unmetered.
Check out net100tb.com; it's in the same datacenter as Leaseweb and may even be a Leaseweb reseller. 100tb.com in the US is very good: 100TB by default, and you can upgrade to 1Gbps unmetered.
I've heard only good things about choopa.com.

You might also want to try voxel.net for some quality bandwidth, but it will probably be expensive.

Hope you do realize, though, that "best possible download speed" often means expensive bandwidth, at around, let's say, $6-8 per megabit or more.

In fact, if you want the best possible speeds worldwide, you'd be better off using a combination of several CDNs like Cachefly, Akamai, Voxel and others, which have prices starting from 10 cents per GB.

IC3 Networks replied: leaseweb takes my vote.
I've tried them before; I used to max out my speed from Europe, the USA, CA and ME.
If you don't need managed servers, then Leaseweb is a good choice for you. I've also seen many people complaining about fdcservers; I have no experience with them myself, so you'd better use the search function here to find more info on them.

As for Choopa, I've heard they are really good, but I have no personal experience with them.

mercmarcus replied: I agree with mariushm. In the USA, use 100tb.com. PERFECT!
In the EU, use net100tb.com. VERY, VERY GOOD!

They can both upgrade you to unmetered 1Gbps.
Right now I'm waiting on a special solution from them: 12×1.5TB drives in a RAID 10 config with 64GB of RAM, which will serve as a CDN point for my customer.

 

Bernardoo replied: I recommend Leaseweb, or dediserv.eu, which is also a Leaseweb reseller. net100tb.com also seems to resell Leaseweb, so getting servers directly from Leaseweb might be the best option.

Leaseweb > *

 



Why Debian is the best Linux Distribution

Debian (http://www.debian.org) is an excellent distribution of GNU/Linux. (A popular commercial alternative to Debian is Red Hat.) The releases of Debian are rock-solid stable and come highly recommended. The Debian packaging system is well developed and acknowledged as an excellent piece of work. You can purchase the CD-ROM distributions of Debian inexpensively (see http://www.debian.org/distrib/vendors for a list of vendors) or burn your own CD-ROMs from images available on the net. This latter option is explored in this chapter.

Here are some specific advantages and benefits that distinguish Debian from other distributions:

  • Debian GNU/Linux makes it very simple to install new applications, configure old ones, and administer the system. The administrator does not have to worry about dependencies, library problems, or even overwriting previous versions of configuration files.
  • As a non-profit organisation, Debian is more of a partner than a competitor to other distributions. Anyone can sign up as a Debian developer and be granted the same privileges as anyone else. There are currently over 870 active Debian developers. New work developed for Debian is available for all of the other Linux distributions to copy as soon as it's uploaded to the Debian servers.
  • The Debian Free Software Guidelines are a critical component from a business standpoint. They specify the requirements for licenses of any package that is to be included with Debian. Debian conforms to the official GNU version of free software which means that every package included in Debian can be redistributed freely.
  • Debian is driven by policy. The formal and publicly available Debian policies have been developed over many years and are a mature response to dealing with the large task of maintaining such a distribution in a distributed manner. Various Debian tools (such as dpkg, apt-get, and lintian) effectively implement the policy and provide a guarantee of quality in the packaging.
  • Debian is an excellent choice for the development of software for all distributions of GNU/Linux. Because Debian's processes, in terms of policies and packaging, are fair, visible, and conformant to open standards, Debian is a very clean and very carefully constructed distribution. Developments that occur on a Debian platform can thus easily be delivered or transferred to other GNU/Linux (and Unix) platforms.
  • It is difficult to upgrade a system from one RedHat release to another. Debian provides simple migration paths that are well trodden. No more re-installing the operating system just to upgrade to the new release.
  • Debian’s tools have the ability to do recursive upgrades of systems.
  • Debian handles dependencies: it identifies the required packages, installs them, and then installs the package you want.
  • Debian packages can Suggest other packages to be installed, and it is left to the user whether to follow the suggestions or not.
  • Multiple packages can Provide the same functionality (e.g., email, web server, editor). A package might thus specify that it depends on a web server, but not which particular web server (assuming it works with any web server).
  • Debian has a utility to install Red Hat packages if you are desperate!
  • Debian does not overwrite your config files nor does the packaging system touch /usr/local except perhaps to ensure appropriate directories exist for local (non-Debian) installed data and utilities.
  • Red Hat uses a binary database for its package data while Debian (dpkg) uses text files. Debian is more robust (if a single file gets corrupted it's less of a problem), and it is possible to fix or modify things by hand using a normal text editor if needed. (Debian's apt-get uses a mixed approach: it uses the same text files as dpkg but adds a binary cache to also get the advantages of a binary database.) A small sketch of reading dpkg's text database is shown after this list.
  • Red Hat packages rarely fix upstream file locations to be standards-compliant, but instead just place files wherever the upstream package happens to put them. Many upstream developers do not know about or conform to the standards. A minor example is that for a while the openssh RPMs created /usr/libexec for the sftp daemon, but libexec is a BSD convention and the Linux Filesystem Hierarchy Standard says such things should go in /usr/lib/<program> or /usr/sbin.
  • Generally speaking, Debian packages must be created by “qualified” developers (and there are thousands of them) who are committed to following Debian’s strict policies requiring such things as FHS compliance and never overwriting config files without permission. Only packages from these developers become part of the Debian archives.
  • Debian runs on more hardware platforms than any other distribution.
  • The Debian packaging philosophy is to keep packages in small chunks so that the user can choose what to install with a little more control.
  • Fedora reportedly interferes with its distribution to make it a less free offering. Its libraries are modified to disallow the compilation of applications that conflict with commercial interests of the MPAA/RIAA.
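
To make the point about dpkg's plain-text database concrete, here is a minimal Python sketch that reads /var/lib/dpkg/status and lists installed packages. The path and the field names (Package, Status, Version) are the standard dpkg ones; the script is purely illustrative and is no substitute for dpkg-query.

    # Minimal sketch: read dpkg's plain-text package database.
    STATUS_FILE = "/var/lib/dpkg/status"

    def parse_status(path=STATUS_FILE):
        """Yield one dict per package stanza (stanzas are separated by blank lines)."""
        entry = {}
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                if line.strip() == "":                # blank line ends a stanza
                    if entry:
                        yield entry
                    entry = {}
                elif line.startswith((" ", "\t")):    # continuation of the previous field
                    continue
                else:
                    field, _, value = line.partition(":")
                    entry[field.strip()] = value.strip()
        if entry:
            yield entry

    if __name__ == "__main__":
        # "install ok installed" is the Status value for fully installed packages
        installed = [p for p in parse_status()
                     if p.get("Status", "").endswith(" installed")]
        print(f"{len(installed)} packages marked installed")
        for pkg in installed[:5]:
            print(pkg.get("Package"), pkg.get("Version"))

Because the database is ordinary text, the same file can be repaired or inspected with nothing more than a text editor if the packaging tools themselves are broken.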

See also http://www.infodrom.org/Debian/doc/advantages.html.

Web Server Comparison: Apache versus IIS

I ran across Apache at 56% – what is wrong? by /home/liquidat this weekend, and the resulting Digg thread, and enjoyed reading the age-old IIS vs. Apache debate waged by loyalists on both sides.  It is great to see the passion for Web servers still very much alive.  This is one of the reasons I love software…it is so much more than bits and bytes.  Software, good and bad, evokes an emotional response from users.  It frustrates the crap out of me when it doesn’t work like I want it to, and it makes me nod my head and say “cool…” when it does something really powerful that I don’t expect.

The IIS vs. Apache debate has been going on for a while, and reminds me of the Mac vs. Windows debate, which also never gets old.  I used to be a die hard Windows fan.  I got my hands on a Windows 95 beta and was so blown away by it.  I was one of those crazy kids that went to CompUSA at midnight the day it was released and bought my own copy.  Later in college I dual-booted into Linux so I could have access to gcc and all the great development tools we were using in class.  Now I run Mac OSX and Vista at home.

When I got out of college, I worked for a start-up ISP, and ended up focusing a lot of my energy on the Web hosting side of the business.  We started out with a Sun Ultra server, running Solaris, then deployed a bunch of Linux servers.  We used Zeus and Apache as a Web server.  They were both great.  I admire Apache for a lot of reasons.  It is a solid Web server with a great extensibility model, and is very reliable when run on Linux.

My history with IIS

I got my hands on IIS when it first came out in 1996.  At first it seemed like a toy (maybe because it was) but it quickly grew up.  With ASP in IIS 3.0 I fell in love.  After hacking so many CGI applications together in C or PERL, I was blown away at how productive I could be with ASP, especially when MDAC came out and made data access so easy.  If I had to make a bet, I’d guess this is one of the reasons people love IIS to this day:  it is easy to setup, use, and incredibly powerful to program against.

I pushed the IIS4/NT 4 Option Pack very hard at the company I worked for in 1997, and we deployed the last beta in production.  It required a reboot every day in order to run properly, and depending on which series of patches we installed, it sometimes required more, but it was worth it.  I remember once installing an Oracle patch one morning, on the recommendation of an Oracle support engineer, that took out the entire server and required a full rebuild.  That was the day I learned to never install patches on a production server without first testing them. 🙂

IIS5 came out with Windows 2000, right as I joined Microsoft, and ended up being a disastrous release for the IIS team.  I remember sitting through meeting after meeting with customers who were hit by Code Red and Nimda, who were justifiably infuriated by the impact the vulnerabilities had on their business.  IIS wasn't very popular inside the company at the time either, as these were the first broad-scale internet worm attacks against any Microsoft product, and it took time for others to realize: it can happen to you.

The IIS team learned some very hard lessons about security vs. features in 2001 and 2002.  We pored over our code, and we hired independent contractors to come pore over our code, fuzz it, hack it, and try to break it.  The result, IIS6, released with Windows Server 2003, is quite possibly the most secure and reliable Web server ever.  Don't take my word for it: search http://secunia.com for IIS security issues yourself, and compare it to any other Web server product.

And with 2007 came IIS7 in Windows Vista, and later this year, with Windows Server "Longhorn".  IIS7 is more like a "v1" release than a "v7".  I can honestly say it is the biggest release of IIS ever.  It has more fundamental improvements and new capabilities than any previous release of IIS, and it hasn't lost sight of the basics: security, reliability, performance.  I think it will change the Web server market.  If you're already an IIS customer, there is a lot to look forward to with IIS7.  And if you haven't checked out IIS for a while, or you are still worried about security or reliability, it is time to give IIS a second look.

Bad reasons to avoid IIS

If you’re saying to yourself:  IIS isn’t as secure as Apache, or isn’t as reliable, or isn’t as fast, you should think twice.

Security.  If you're worried about IIS security vs. Apache, your concerns are outdated.  Check out http://secunia.com and compare IIS5's and IIS6's track record over the last 4-5 years to Apache's.  Having been on the IIS team during Code Red and Nimda, I can tell you it was a very painful experience, one I don't ever hope to re-live, nor do I wish it on my worst enemy.  The IIS team learned hard lessons in 2001, and the results speak for themselves.  Is IIS perfect?  Nope, it is still built by fallible humans, and we make mistakes just like every other engineering team.

Reliability and Performance.  IIS6 included a new process model which can reliably host Web applications and monitors them for health and responsiveness.  It can proactively recycle applications when they are unhealthy.  IIS7 takes this process model to the next level by automatically isolating each new site in its own Application Pool when it is created, and dynamically assigning a unique SID (identity) to the AppPool so it is isolated from all other sites on the box from a runtime identity perspective, without any additional management required.  It also isolates the configuration for the AppPool, so it is impossible to read configuration from other sites on the server.  This provides the ultimate Web server architecture for Windows: a high-performance, multi-threaded server that provides secure isolation of Web sites by default and is also agile enough to respond to poor health conditions and gracefully recycle applications.

If you're worried about IIS performance and reliability when running PHP vs. running it on Apache, your concerns are definitely valid.  Up until recently there were only two ways to run PHP: the slow way (CGI), and the unreliable way (ISAPI).  🙂  This is primarily a result of the lack of thread-safety in some PHP extensions; they were originally written for the pre-fork Linux/Apache environment, which is not multi-threaded.  Running them on IIS with the PHP ISAPI causes them to crash and take out the IIS process serving your application.

Fortunately, the Microsoft / Zend partnership has brought about fixes to these issues with many performance and compatibility fixes by Zend, and a FastCGI feature for IIS which enables fast, reliable PHP hosting.  FastCGI is available now in Tech Preview form, and has also been included in Windows Server “Longhorn” Beta 3.  It will be included in Vista SP1 and Longhorn Server at RTM.

Reasons you should check out IIS7 if you use Apache today

There are so many new capabilities in IIS7 that listing them all would turn this already long post into a short novel.  If you want lots of specifics, go read through the IIS7 site.  Here are a few reasons you Apache users might be interested in looking at IIS7:

 

Text file configuration

Apache has httpd.conf – a simple text file for configuration – which makes it very easy to edit Apache configuration using text/code editors or to write Perl or other scripts to automate configuration changes.  Since the configuration file is just a text file, it also makes it easy to copy configuration from one server to another.  Unfortunately, Apache does require the administrator to manually signal Apache to reload the configuration in order for changes to take effect.

Many IIS customers dread IIS' configuration store – the 'metabase' – and for good reason.  It has been an opaque configuration store like the registry since it was introduced in IIS4, and while there are many tools and APIs to use to configure IIS with, nothing beats being able to open up your configuration in the text editor of your choice and directly change configuration settings.  With IIS7, all IIS configuration is now stored in a simple XML file called applicationHost.config, which is placed by default in the \windows\system32\inetsrv\config directory.  Changing configuration is as simple as opening the file, adding or changing a configuration setting, and saving the file.  Want to share configuration across a set of servers?  Simply copy the applicationHost.config file onto a file share and redirect IIS configuration to look there for its settings.  And whether your configuration is stored locally on the hard drive or on a file server, changes take effect immediately, without requiring any restarts.  All IIS configuration settings are self-described in a schema file that can be found in \windows\system32\inetsrv\config\schema.  Adding new configuration to IIS is as simple as dropping a new schema file in this directory and registering it; it then automatically becomes available through IIS' cmd-line tool and programmatic APIs.
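
Because applicationHost.config is just XML, any scripting language can read or automate it, much as people already do with httpd.conf.  Here is a minimal, illustrative Python sketch that lists the sites defined on a server; the file path and the <system.applicationHost>/<sites> element path reflect the usual layout of the file, but treat the sketch as a rough example rather than a schema reference (AppCmd.exe and the managed APIs described later in this post are the supported tools).

    # Minimal sketch: read IIS7's applicationHost.config as plain XML.
    # The path and element names are illustrative; this is not a supported admin API.
    import xml.etree.ElementTree as ET

    CONFIG = r"C:\Windows\System32\inetsrv\config\applicationHost.config"

    def list_sites(path=CONFIG):
        """Return the names of the <site> entries under <system.applicationHost><sites>."""
        root = ET.parse(path).getroot()
        sites = root.find("system.applicationHost/sites")
        return [] if sites is None else [s.get("name") for s in sites.findall("site")]

    if __name__ == "__main__":
        for name in list_sites():
            print(name)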

Distributed Configuration (by default)

Apache supports distributed configuration with a feature called .htaccess.  It is a powerful feature that enables configuration for a Web site to be overridden using a simple text file in the content directory.  Unfortunately, due to the way it is designed in Apache, using it incurs a huge performance hit.  In fact, the apache.org site recommends you avoid using it whenever possible.

IIS7 supports distributed configuration in web.config files, and it has some important advantages over .htaccess.  Web.config is the file that ASP.NET uses today to store configuration, so developers now have a single file, format and API with which to target Web site / app configuration.  Imagine storing your PHP, Apache and Web application settings in one file.  This distributed configuration support is very powerful and allows every per-URL IIS configuration property to be set in distributed configuration.  IIS7 caches web.config data, which avoids the per-request performance hit Apache suffers from.  The IIS implementation of distributed config is so good that we've made it the default for a bunch of IIS configuration that we know developers typically want to set along with their Web sites.  For example, if you use any IIS7 tool to override the default document for a site or application, that setting will be stored in the web.config file for that directory by default.  Of course, you can override the default and store everything in IIS' global configuration file if you want, and you can decide on a section-by-section basis which settings you want distributed and which you want to keep centralized.  There is much more granularity in IIS' configuration locking support than in Apache's, enabling you to lock even at the attribute level if desired.

 

Extensibility (C/C++/C#/VB.NET and 30+ other languages…)

As I noted above, Apache has had a very modular architecture with powerful extensibility for many years.  Apache's architecture has allowed many people to take it and add to / modify / extend the Web server to do many custom things.  The resulting community of modules for Apache has been impressive to watch.   IIS' ISAPI extensibility hasn't been a complete slouch either: some of the world's biggest application frameworks have successfully run on ISAPI, including ASP, ASP.NET, ColdFusion, ActiveState Perl, etc.  Unfortunately, the number of successful ISAPI developers does seem to be smaller than the number of successful Apache mod developers, and the product team itself elected to rarely use ISAPI to build actual IIS features.

This all changes with IIS7.  With IIS7, IIS introduces a new native extensibility interface, CHttpModule, on top of which all of the IIS features have been ported as discrete, pluggable binaries.  The IIS core Web server itself is a very thin event pipeline, and each of the IIS features can now be added and removed independently.  The extensibility point, CHttpModule, is much more powerful than ISAPI, and provides fully asynchronous, superset support for extensions and filters.  Don't like how IIS does a particular feature?  Rip it out and replace it with your own: you have all the APIs the IIS team has.

Even more impressive, IIS7 introduces managed extensibility of the core Web server via the existing System.Web IHttpModule and IHttpHandler interfaces, enabling any .NET framework developer to extend IIS at the core and build a new, custom or replacement feature.  I showed this off in a recent blog post on how to build a SQL Logging module that can add to or replace the built-in W3C logging using .NET in less than 50 lines of code.

 

Advanced Diagnostics and Troubleshooting support

Whether you’re running IIS or Apache, troubleshooting problems can be a real bear.  Applications running in a high-performance, multi-threaded, console environment are very tough to debug, especially when in production use.  IIS7 innovates in several key ways to make the support for these situations far better than what you see with any other Web server.

First, IIS supports a feature called 'failed request tracing', which is really very cool.  Simply give IIS a set of error conditions to watch out for, based on response code or timeout value, and IIS will trap those conditions and log a detailed trace of everything that happened during the request lifetime leading up to the error.  Seeing requests time out on a periodic basis, but not sure why?  Simply tell IIS to look out for requests that take longer than n seconds to complete, and IIS will show you every step in the request lifetime, including the duration of each step, and you'll see the last event that fired before the timeout occurred.  Are you seeing the dreaded "500 – Internal Server Error"?  Tell IIS to trap this error and then browse through each step of the request to see where things went south.  I know of nothing like this with Apache.
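
The underlying idea is easy to see in a few lines of code.  The sketch below is not IIS's API; it is a language-neutral illustration (written in Python) of the pattern: buffer detailed trace events for every request, then persist the buffer only when the request finishes with a watched status code or runs longer than a threshold.

    # Conceptual sketch of failed-request tracing (illustration only, not IIS's API):
    # keep a per-request trace buffer and flush it only for failed or slow requests.
    import time

    class FailedRequestTracer:
        def __init__(self, watch_statuses=(500,), max_seconds=2.0):
            self.watch_statuses = set(watch_statuses)
            self.max_seconds = max_seconds

        def run(self, handler, request):
            events, start = [], time.monotonic()
            def trace(step):                       # handlers record each pipeline step
                events.append((time.monotonic() - start, step))
            status = handler(request, trace)
            elapsed = time.monotonic() - start
            if status in self.watch_statuses or elapsed > self.max_seconds:
                self._flush(request, status, elapsed, events)   # keep the detailed trace
            return status                                       # healthy requests: buffer is discarded

        def _flush(self, request, status, elapsed, events):
            print(f"FAILED {request} -> {status} after {elapsed:.3f}s")
            for offset, step in events:
                print(f"  +{offset:.3f}s  {step}")

    # Usage: tracer.run(my_handler, "/some/url"), where my_handler(request, trace)
    # calls trace("step name") at each stage and returns an HTTP status code.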

IIS also supports real-time request monitoring and runtime data.  Want to know which requests are in flight on the server, how long they have been running, which modules they are in, etc?  IIS can tell you from the cmd-line, administration tool, or even programmatically via .NET and WMI APIs.  It is very easy to now look inside IIS and see what’s going on inside your Server.

Rich Administration APIs and Tools

This is an area where IIS has traditionally shined, and IIS7 takes the lead even further.  IIS7’s new administration tool is very simple and easy to use, but extremely powerful.  It is now feature-focused: simply click on a Web server, site or application and see every feature available to manage.  On the right hand pane there is a set of simple administration tasks for each scope that makes it easy to create new sites and applications, modify logging settings, or see advanced settings.  The administration tool remotes over HTTP, making it possible to manage the server locally or over the internet.  And the tool fully supports the distributed configuration model, making it possible to add ‘delegated’ administrators for Web sites and applications and allowing them to use Web.config or the same Administration tool to configure their Web site.  The administration tool is also completely modular, and built on top of a new extensibility framework, making it easy to add new features into the tool.

In addition to a rich administration tool, IIS also ships AppCmd.exe, a swiss-army knife for cmd-line administration.  With it, you can set any IIS setting, view real-time request and runtime information, and much more.

IIS7 also includes several programmatic interfaces which can be used to manage the server.  Sure, you can use Perl to hack away at the new text-based config file if you want, or you can use rich, object-oriented APIs in any .NET or script language if you prefer.  Microsoft.Web.Administration is a powerful new .NET API for programmatically managing the server.  IIS7 also includes a new WMI provider for scripting management using VBScript or JScript.

 

Summary

IIS7 is a major overhaul of the Web server.  It builds on the rock-solid security and reliability of IIS6, and promises some very powerful new extensibility and management capabilities that meet and exceed what Apache can do today.  It’s already in Vista, so you can use it on the desktop today, and with Beta 3 it is available for free for production use through the GoLive program.

I’m quite certain this won’t end the debate of which is the better Web server, but I thought I’d add my two cents. 😉

 

Source : http://blogs.iis.net/bills/archive/2007/05/07/iis-vs-apache.aspx

Elance vs oDesk Review – A Freelancer's Perspective

As a freelance writer, there is one activity that takes up as much of my time as writing does, and that's looking for work. Once you find work, your next concern is whether or not you'll get paid. And if you do get paid, how long will it take?

As a freelancer, there are numerous sites to choose from on which you can bid on projects. Two popular sites today are Elance and oDesk. From the homepage, you might think that these two sites are pretty similar. After all, they both state that there’s guaranteed work with guaranteed payment. A freelancer’s dream come true, right?

Let’s compare the two:

Elance Guarantees both Hourly and Fixed Price Work

All fixed price projects on Elance use escrow. Escrow is pretty straightforward and it’s safe for both the buyer and the provider. The buyer funds the account and they release it when the project is completed. If for some reason they forget to release the funds, they are automatically released 30 days later. It might take some time, but you’re guaranteed payment.

Elance is able to guarantee its hourly projects as well. This is done by the provider using Tracker with Work View. Essentially, Work View takes screenshots as you work on a project, and the hours you bill must correspond to them. Your hours are also automatically paid when timesheets are sent, unless the client identifies certain hours as not being related to the project.

I don't typically work on an hourly basis, but it's still good to know that hourly work is guaranteed. And since 99% of my projects use escrow, I like having that security.

oDesk Only Guarantees Hourly Work

Although oDesk does guarantee that you will receive payment for hourly work (and it is tracked in a similar fashion to Elance), they don’t have escrow. This makes oDesk great for providers who do work on an hourly basis, but if you work on a project by project basis, there’s no escrow system to guarantee your payment.

Communicating through Elance

Elance offers a variety of tools to assist you in communicating with your clients as well as ensuring that the project flows smoothly. Once you are awarded a project, you have access to a Private Message Board with Real Time Chat, File Sharing with Version Control, Project Terms with Milestones and Comments, Status Reports and Timesheets, Autopay on hourly projects and Escrow for fixed price projects.

All of these tools allow you to document the project and all details associated with it. You can discuss the project prior to the award and after on the private message board; business terms are then set up with the necessary milestones and escrow is funded. Throughout the entire process, everything is documented so you can refer back to any messages to ensure you’re both on the same page.

Communicating through oDesk

oDesk offers the Work Diary, which tracks the amount of time you spend working on a project, but that’s about it. You don’t have the many tools that Elance offers you to help you work. In fact, there’s not even a private message board. Communication must occur through personal email, telephone or chat, whichever the provider and buyer agree upon. In some ways this is easier, however, emails can get lost or go to SPAM boxes, chats get turned off and phone calls can be missed. And there’s no communal workspace where all the communications are tracked. There’s ample opportunity for miscommunication here.

Quality of Projects

As a provider, the last thing you want to do is spend hours communicating with a potential buyer who may not even have a whole lot of potential. Emails, phone calls, and chats all take time that you could be spending on your current projects or looking for serious projects. One way that Elance ensures quality projects is by testing the commitment level of buyers. It does this by charging a $10 activation fee to ensure that the buyer has a legitimate form of payment for paying providers for the work they perform. This must be completed before the buyer can post any projects.

On oDesk, buyers can post as many projects as they want and communicate with all of the providers that they want and never even award a single project. They advertise that it’s “free to post jobs and interview contractors,” but that’s not necessarily a good thing. There’s nothing worse than talking with a lot of buyers who are just testing the waters and never result in a paying project.

Granted, there is nothing that requires a buyer to award a project on Elance, but it seems to attract a higher quality buyer and higher quality projects.

Another indication of the quality of jobs is the budgets allowed by the two websites. Elance has a minimum bid of $50, while oDesk actually has an estimated budget level of $5! Unfortunately, there are few projects that most providers can do for only $5, and quality providers charge more than $5 per hour.

Conclusion

I am certain that there are numerous providers on oDesk who are doing well for themselves, and that's great. However, from my perspective, and I've been doing this for several years, oDesk just doesn't provide the level of security that I need as a provider. If my business is to be successful, then I have to know that I'm protected through the site that I choose to work through and pay a membership to. Elance offers me that security. Sure, there are times when projects go awry and I don't always come out on the better end of the deal, and I may lose money, but at least I know that the decision we came to is a fair one for both provider and buyer.

Escrow, and the support system that surrounds it, is the most important thing for me. Anyone venturing out into the freelance marketplace, whether as a designer, writer, administrative assistant or other consultant, should think long and hard about the payment security system of any website they choose to work through. Sure, you can always charge half up front and half upon completion, but there's no guarantee that you're going to get that second half. Escrow ensures you get all that you're due.

Words You Want is your one-stop resource for SEO ghostwriting and eBook writing. Words You Want offers a variety of SEO writing services, pre-written ebooks in the eBooks To-Go store, link building, social media packages, SEO packages and more. Visit WordsYouWant.com and watch our animated videos to learn more about how Words You Want can help you with your online marketing campaigns and SEO.

Article Source: http://EzineArticles.com/?expert=Valerie_Mellema

Elance vs oDesk — another perspective

Outsourcing – oDesk Experiences vs. Elance

After having a few bad freelance experiences (details here) with Elance.com, I decided to look elsewhere for outsourcing certain web development tasks and research.

I have been using oDesk for a few weeks now and I can honestly say I like the service, the features, and the providers. The first thing I did when I created my account was to sign up as a provider: I wanted to see what types of hoops a provider had to jump through in order to be listed. The very first thing you must do is take a basic functionality and usability exam to ensure that you understand how oDesk functions. This was a painless, although slightly annoying, process, but necessary to ensure a basic understanding. Additionally, oDesk had other exams listed and recommended that I take a few in order to bolster my potential employers' confidence.

Exams

oDesk also offers internal testing and self-evaluation for certain skills. Completing one of these exams adds ratings to your profile, and I can tell you from experience that they aren't a cakewalk. Additionally, these are timed exams, so you can't fake it by trying to Google the answers, because you will likely run out of time. I consider myself knowledgeable in the network and systems security arena, and I must say that the network security exam wasn't easy. I expected it to have good questions but not tough ones, and I would consider them a solid pre-screening qualifier. However, there are those rare individuals who can pass any exam but have no practical experience, and this is where the interview process and work portfolio come into play.

Screening Candidates

Avoid the temptation to hire the candidate with the absolute lowest price. In my experience you will not be happy with the deliverable or, perhaps an equally important factor, the communication. Candidates with the lowest price may be trying to get a foot in the door and establish a reputation, or they might just be cheap because they lack sufficient skills and you are paying them to learn. You can get good deals and find quality candidates at really low rates, but it will be a gamble. Instead of looking at price alone, make sure you look at the cover letter, the number of oDesk hours and the feedback score. Regarding the feedback score, be sure to look at the total number of feedback entries vs. the score average; one or two entries may not be enough to provide the necessary assurances.

Payment

As a buyer, you will need to set up a credit card for automatic payments to your providers. Payments for hourly services occur on the following schedule:

  • Monday: The work week begins at 12 a.m. GMT.
  • Sunday: The work week ends at 11:59 p.m. GMT. The provider receives his/her timelog for review and is responsible for making sure it is accurate: any offline time should be added, and all non-work time should be removed.
  • Monday: The deadline for the Work Diary is Monday 12 p.m. (noon) GMT. At that time the final timelog is sent to the buyer for review and the dispute period begins. The buyer sees $X in Pending Debit; the provider sees $Y in Pending Credit.
  • Wednesday: The review period ends Wednesday evening, PST.
  • Thursday: The buyer's invoice is now due. The buyer will see a negative balance in the "Your Balance" box at the top left of the Provider Console, and the buyer's credit card is charged Thursday evening.
  • Next Wednesday: The provider's earnings become available. The provider will see a positive balance in the "Your Balance" box at the top left of the Provider Console, and once the security period has passed, the provider can withdraw the balance.
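
For a concrete picture of how that timeline plays out, here is a small, illustrative Python sketch that maps a single hourly work week (identified by its Monday start, GMT) onto the milestones above. The day offsets simply encode the schedule as described; the length of the security period is not stated above, so it is left as an explicitly hypothetical parameter.

    # Illustrative only: turn the schedule above into concrete dates.
    from datetime import date, timedelta

    def odesk_payment_timeline(week_start_monday, security_days=7):
        """Map one hourly work week onto the payment milestones described above.

        week_start_monday -- the Monday (GMT) on which the work week begins
        security_days     -- security-period length; not specified in the schedule,
                             so this default is a placeholder, not an oDesk figure
        """
        assert week_start_monday.weekday() == 0, "expected a Monday"

        def day(n):
            return week_start_monday + timedelta(days=n)

        return [
            ("work week begins (12 a.m. GMT)", day(0)),
            ("work week ends (11:59 p.m. GMT)", day(6)),
            ("Work Diary deadline (noon GMT)", day(7)),
            ("review period ends (evening, PST)", day(9)),
            ("buyer's credit card charged", day(10)),
            ("provider earnings available", day(16)),
            ("withdrawal possible (after security period)", day(16 + security_days)),
        ]

    if __name__ == "__main__":
        for label, when in odesk_payment_timeline(date(2010, 3, 1)):  # 2010-03-01 was a Monday
            print(f"{when:%a %Y-%m-%d}  {label}")

In practice this means roughly two and a half weeks can pass between starting an hourly work week and being able to withdraw the money for it, which is worth factoring into cash-flow planning.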

Summary

This post barely scratches the surface of oDesk, its benefits and its vast number of features. So far I have enjoyed using the oDesk service and its detailed reporting features. I intend to continue documenting my experiences with oDesk as I learn more. Additionally, I will be reviewing other services as I become aware of them. If you have any service that you would recommend, or any outsourcing experience (positive or negative) that you would like to share, please feel free to do so.


My WordPress Custom Form Plug-in

I developed a Custom Form plug-in that works in every version of WordPress. Here on this WordPress.com blog I cannot install it because of some restrictions, but it works on my personal web site.

This plug-in is sold and implemented by me. It is a plug-in with which you can insert, edit, delete, sort and select every type of data.

Below are its snapshots:

 

 

This is the Admin Console of the plug-in. It can be restricted to users according to WordPress user security levels.

 

Conversion Optimization of Your Website

In internet marketing, conversion optimization, or conversion rate optimization is the science and art of creating an experience for a website visitor with the goal of converting the visitor into a customer. It is also commonly referred to as CRO.

Web origins

Conversion optimization was born out of the need of lead-generation and e-commerce internet marketers to improve their websites' results. As competition grew on the web during the early 2000s, Internet marketers had to become more measurable with their marketing tactics. They began experimenting with website design and content variations to determine which layouts, copy text, offers and images would improve their conversion rates. Many practitioners have contributed to the field, including Bryan and Jeffrey Eisenberg, Avinash Kaushik, Anne Holland, Tim Ash, Ayat Shukairy, Jonathan Mendez, Khalid Saleh, Chris Goward, Keith Hagen, Jon Correll and Zack Linford.

Why conversion optimization

Frequently, when marketers target a pocket of customers that has shown spectacular lift in an ad campaign, they belatedly discover the behavior is not consistent. Online marketing response rates fluctuate widely from hour to hour, segment to segment and offer to offer.

This phenomenon can be traced to the difficulty humans have separating chance events from real effects. Using the haystack process, at any given time marketers are limited to examining and drawing conclusions from small samples of data. However, psychologists (led by Kahneman and Tversky) have extensively documented the human tendency to find spurious patterns in small samples, which explains why poor decisions are made. Statistical methodologies can therefore be leveraged to study large samples and mitigate the urge to see patterns where none exist.
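
As a concrete example of leaning on statistics instead of eyeballing small samples, the sketch below (in Python) runs a standard pooled two-proportion z-test on the conversion rates of two page variants. The visitor and conversion counts are invented purely for illustration.

    # Two-proportion z-test for comparing conversion rates (illustrative numbers).
    from math import erf, sqrt

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return (z, two-sided p-value) for the pooled two-proportion z-test."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # two-sided p-value from the standard normal CDF
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    if __name__ == "__main__":
        # variant A: 120 conversions from 5,000 visitors; variant B: 150 from 5,000
        z, p = two_proportion_z_test(120, 5000, 150, 5000)
        print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests a real difference, not noise

With a few hundred visitors per variant the same difference would usually not be distinguishable from noise, which is exactly the small-sample trap described above.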

These methodologies, or "conversion optimization" methods, are then taken a step further to run in a real-time environment. The real-time data collection, and the messaging that results from it, increase the scale and effectiveness of the online campaign.

How conversion optimization works

Conversion rate optimization is the process of increasing website leads and sales, without spending money on attracting more visitors, by reducing your visitor "bounce rate". Some test methods enable one to monitor which headlines, images and content help convert more visitors into customers.

There are several approaches to conversion optimization, with two main schools of thought prevailing in the last few years. One school is more focused on testing as an approach to discover the best way to increase the conversion rates of a website, a campaign or a landing page. The other school is focused more on the pretesting stage of the optimization process. In this second approach, the optimization company will invest a considerable amount of time understanding the audience and then creating a targeted message that appeals to that particular audience; only then is it willing to deploy testing mechanisms to increase conversion rates. The article "a case against multi-variant testing" outlines some of the reasons testing should not be the only component in conversion optimization work.

Elements of the test-focused approach to conversion optimization

Conversion optimization platforms for content, campaigns and delivery, then need to consist of the following elements:

Data collection and processing

The platform must process hundreds of variables and automatically discover which subsets have the greatest predictive power, including any multivariate relationships. A combination of pre- and post-screening methods is employed, dropping irrelevant or redundant data as appropriate. A flexible data warehouse environment accepts customer data as well as data aggregated by third parties. Data can be numeric or text-based, nominal or ordinal. Bad or missing values are handled gracefully. Data may be geographic, contextual, frequency-based, demographic, behavioral, customer-related, etc.

Optimization goals

The official definition of "optimization" is the discipline of applying advanced analytical methods to make better decisions. Under this framework, business goals are explicitly defined and then decisions are calibrated to optimize those goals. The methodologies have a long record of success in a wide variety of industries, such as airline scheduling, supply chain management, financial planning, military logistics and telecommunications routing. Goals should include maximization of conversions, revenues, profits, LTV or any combination thereof.

Business rules

Arbitrary business rules must be handled under one optimization framework. Some typical examples include:

  • Minimum (or maximum) weights for specific offers
  • “Share of voice” among all offers
  • Differential eligibility for different offers
  • Mutually exclusive offers
  • Bundled offers
  • Specified holdout sample

Such a platform should understand these and other business rules and then adapt its targeting rules accordingly.

Real-time decision making

Once mathematical models have been built, ad/content servers use an audience-screening method to place visitors into segments and select the best offers, in real time. Business goals are optimized while business rules are simultaneously enforced. Mathematical models can be refreshed at any time to reflect changes in business goals or rules.
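
A toy Python sketch of what one such real-time decision step could look like: score every eligible offer for the visitor's segment using a pre-built model, and enforce a simple "share of voice" cap from the business-rules list above. The model scores, offer names and the 60% cap are all invented for illustration; a real platform would plug in the statistically trained models and the full rule set described in the preceding sections.

    # Toy real-time offer selection with one business rule (illustrative values only).
    import random
    from collections import Counter

    # Hypothetical pre-built model: expected conversion rate per (segment, offer).
    MODEL_SCORES = {
        ("new_visitor", "free_trial"): 0.042,
        ("new_visitor", "discount_10"): 0.035,
        ("returning", "free_trial"): 0.021,
        ("returning", "discount_10"): 0.038,
    }
    OFFERS = ("free_trial", "discount_10")
    MAX_SHARE_OF_VOICE = 0.60   # business rule: no single offer above 60% of impressions
    served = Counter()

    def choose_offer(segment):
        """Pick the highest-scoring eligible offer, respecting the share-of-voice cap."""
        total = sum(served.values()) or 1
        eligible = [o for o in OFFERS if served[o] / total < MAX_SHARE_OF_VOICE]
        if not eligible:                 # fall back if every offer has hit the cap
            eligible = list(OFFERS)
        best = max(eligible, key=lambda o: MODEL_SCORES.get((segment, o), 0.0))
        served[best] += 1
        return best

    if __name__ == "__main__":
        for _ in range(10):
            segment = random.choice(["new_visitor", "returning"])
            print(segment, "->", choose_offer(segment))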

Statistical learning

Results are made repeatable by employing a wide array of statistical methodologies: variable selection, validation testing, simulation, control groups and other techniques together help to distinguish true effects from chance events. A champion/challenger framework ensures that the best mathematical models are always the ones deployed. In addition, performance is enhanced by the ability to analyze huge datasets and to retain historical learning.


Source: Wikipedia (http://en.wikipedia.org/wiki/Conversion_optimization)