Think back to your Internet habits a few years ago (hello, MySpace). Does everything seem different now?
That's probably because it is. 
A decade ago, the Internet was a completely different place. Facebook had just made its debut and our precious Twitter was about to be born. Since then, they and other popular websites have been revamped and tricked out to satisfy the eyes and needs of the immense traffic they draw. 
Check out this infographic made by Robert Morris at NinjaEssays, which compares what some of our favorite websites looked like in their first iterations with how they look now.
Call it the yearbook of websites. (And Google is definitely that one person who never ages.)
What Popular Websites Used to Look Like (image: Robert Morris/NinjaEssays)
Hillary Clinton has finally spoken out about the controversy that has dogged her for a week since it came to light that she used a personal email server for her activities as secretary of state.
Speaking at a press conference Tuesday, Clinton implied that she had prioritized her personal convenience: she didn't want to carry two devices, one for personal email and one for her official business.
"It would've been better for me to use two separate phones and two email accounts," she said. "I thought using one device would be simpler, and obviously, it hasn't worked out that way."
A dual-device existence is familiar to almost anyone who had a professional career in the 2000s. After the rise of the iPhone and the "bring your own device" phenomenon took over the workplace, most people migrated to a single-device solution, where work and personal messaging coexist, not only on the same device, but sometimes in the same app.
Clinton's run as secretary of state began in 2009. Even back then, most smartphones could accommodate more than one email account, so Clinton's excuse for not wanting two devices for two email addresses may not ring true to many people.
However, the question of multiple accounts is separate from that of security. The reason many people needed to carry around multiple devices is because work and personal data co-mingling on the same device is frowned upon by most IT departments. There's a real concern of attack vectors on the personal side (where strict adherence to best security practices is rare), but there's also the concern that IT would suddenly have dominion over all your personal data, with the ability to read it, secure it and even wipe it at will.
Most people don't want that, and honestly, your IT department doesn't either — they don't want to waste their time policing your downloading of games and looking at porn on your own time. Hence the two-device situation.
It's only in recent years that mobile devices have addressed the multiple-device issue with more sophisticated solutions. On iPhones and some Android devices, many IT departments have been content to simply manage specific messaging apps like Good (or use Microsoft Exchange), with varying levels of security.
Clinton famously carried a BlackBerry, the device of choice for government work because of its emphasis on security. BlackBerry engineered BlackBerry 10 (which debuted in 2013, just as Clinton left office) to specifically address the two-device problem with a feature called BlackBerry Balance. BlackBerry 10 is natively able to accommodate not just two email accounts, but two entirely different device profiles. Normally it's your personal phone, but it's always a passcode away from becoming your work phone. You still get notifications whether you're logged in or not, but you can't see the contents of those alerts until you log in.
BlackBerry actually keeps the two profiles separate at the chip level, meaning there is no possibility of co-mingling data, and one can be remote-wiped while leaving the other alone.
Prior to 2013, though, there was no standard way to secure a BlackBerry like Clinton's with two email accounts, at least not without giving the IT person in charge complete dominion over all the data on the phone. To fulfill the criteria that Clinton demanded — secure email that's not sitting on a cloud service, plus a single-BlackBerry solution — she had just one option: Set up her own email server.
Politically, it was probably not the best choice. But for what she wanted to do, it makes total technical sense.


Olery was founded almost 5 years ago. What started out as a single product (Olery Reputation) developed by a Ruby development agency grew into a set of different products and many different applications as the years passed. Today we have not only Reputation as a product, but also Olery Feedback, the Hotel Review Data API, widgets that can be embedded on a website, and more products/services coming in the near future.
We’ve also grown considerably in the number of applications we run. Today we deploy over 25 different applications (all Ruby); some of these are web applications (Rails or Sinatra), but most are background processing applications.
While we can be extremely proud of what we have achieved so far there was always something lurking in the dark: our primary database. From the start of Olery we’ve had a database setup that involved MySQL for crucial data (users, contracts, etc) and MongoDB for storing reviews and similar data (essentially the data we can easily retrieve in case of data loss). While this setup served us well initially we began experiencing various problems as we grew, in particular with MongoDB. Some of these problems were due to the way applications interacted with the database, some were due to the database itself.
For example, at some point in time we had to remove about a million documents from MongoDB and re-insert them later on. This process put the database in a near total lockdown for several hours, resulting in degraded performance. Performance didn’t recover until we performed a database repair (using MongoDB’s repairDatabase command), and the repair itself also took hours to complete due to the size of the database.
In another instance we noticed degraded performance in our applications and managed to trace it to our MongoDB cluster. However, upon further inspection we were unable to find the actual cause of the problem. No matter what metrics we installed, tools we used or commands we ran, we couldn’t find the cause. It wasn’t until we replaced the primaries of the cluster that performance returned to normal.
These are just two examples; we’ve had numerous cases like this over time. The core problem here wasn’t just that our database was acting up, but also that whenever we looked into it there was absolutely no indication as to what was causing the problem.

The Problem Of Schemaless

Another core problem we’ve faced is one of the fundamental features of MongoDB (or any other schemaless storage engine): the lack of a schema. The lack of a schema may sound interesting, and in some cases it can certainly have its benefits. However, for many the usage of a schemaless storage engine leads to the problem of implicit schemas. These schemas aren’t defined by your storage engine but instead are defined based on application behaviour and expectations.
For example, you might have a pages collection where your application expects a title field with a type of string. Here the schema is very much present, although not explicitly defined. This is problematic if the data’s structure changes over time, especially if old data is not migrated to the new structure (something that is quite difficult to do in schemaless storage engines). For example, say you have the following Ruby code:
post_slug = post.title.downcase.gsub(/\W+/, '-')
This will work for every document that has a title field that returns a String. It will break for documents that use a different field name (e.g. post_title) or that simply don’t have a title-like field. To handle such cases you’d need to adjust the code as follows:
if post.title
  post_slug = post.title.downcase.gsub(/\W+/, '-')
else
  # ...
end
Another way of handling this is defining a schema in your models. For example, Mongoid, a popular MongoDB ODM for Ruby, lets you do just that. However, when defining a schema using such tools one should wonder why they aren’t defining the schema in the database itself. Doing so would solve another problem: re-usability. If you only have a single application then defining a schema in the code is not really a big deal. However, when you have dozens of applications this quickly becomes one big mess.
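For illustration, a minimal sketch of what such an in-model schema looks like with Mongoid (the Page model and its fields are hypothetical):

require 'mongoid'

# Hypothetical model: the "schema" lives in application code rather than in
# the database, so every application using this collection has to repeat
# (and agree on) these definitions.
class Page
  include Mongoid::Document

  field :title, type: String
  field :body,  type: String
end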
Schemaless storage engines promise to make your life easier by removing the need to worry about a schema. In reality these systems simply make it your own responsibility to ensure data consistency. In certain cases this might work out, but I’m willing to bet that for most this will only backfire.

Requirements Of A Good Database

This brings me to the requirements of a good database, more specifically the requirements Olery has. When it comes to a system, especially a database, we value the following:
  1. Consistency.
  2. Visibility of data and the behaviour of the system.
  3. Correctness and explicitness.
  4. Scalability.
Consistency is important as it helps set clear expectations of a system. If data is always stored in a certain way then systems using that data become much simpler. If a certain field is required at the database level, an application doesn’t need to check for the existence of such a field. A database should also be able to guarantee the completion of certain operations, even under high load. There’s nothing more frustrating than inserting data only for it to not show up until a few minutes later.
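As a rough sketch of what "required at the database level" can look like in practice (the table and columns here are made up), a Sequel migration can declare the requirement once and every application using the table benefits:

require 'sequel'

Sequel.migration do
  change do
    create_table(:reviews) do
      primary_key :id
      String  :title,    null: false               # required at the database level
      Integer :rating,   null: false
      String  :language, null: false, default: 'en'
    end
  end
end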
Visibility applies to two things: the system itself and how easy it is to get data out of it. If a system misbehaves it should be easy to debug. In turn, if a user wants to query data this should be easy too.
Correctness means that a system behaves as expected. If a certain field is defined as a numeric value, one shouldn’t be able to insert text into that field. MySQL is notoriously bad at this: it lets you do exactly that, and as a result you can end up with bogus data.
Scalability applies not only to performance, but also to the financial aspect and to how well a system can deal with changing requirements over time. A system might perform extremely well, but it shouldn’t do so at the cost of large quantities of money or by slowing down the development cycle of the systems that depend on it.

Moving Away From MongoDB

With the above values in mind we set out to find a replacement for MongoDB. Since the values noted above are often core features of traditional RDBMSs, we set our eyes on two candidates: MySQL and PostgreSQL.
MySQL was the first candidate as we were already using it for some small chunks of critical data. MySQL however is not without its problems. For example, when defining a field as int(11) you can just happily insert textual data and MySQL will try to convert it. Some examples:
mysql> create table example ( `number` int(11) not null );
Query OK, 0 rows affected (0.08 sec)

mysql> insert into example (number) values (10);
Query OK, 1 row affected (0.08 sec)

mysql> insert into example (number) values ('wat');
Query OK, 1 row affected, 1 warning (0.10 sec)

mysql> insert into example (number) values ('what is this 10 nonsense');
Query OK, 1 row affected, 1 warning (0.14 sec)

mysql> insert into example (number) values ('10 a');
Query OK, 1 row affected, 1 warning (0.09 sec)

mysql> select * from example;
+--------+
| number |
+--------+
|     10 |
|      0 |
|      0 |
|     10 |
+--------+
4 rows in set (0.00 sec)
It’s worth noting that MySQL will emit a warning in these cases. However, since warnings are just warnings they are often (if not almost always) ignored.
Another problem with MySQL is that any table modification (e.g. adding a column) results in the table being locked for both reading and writing. This means that any operation using such a table has to wait until the modification has completed. For tables with lots of data this could take hours, possibly leading to application downtime. This has led companies such as SoundCloud to develop tools such as lhm to deal with this.
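For illustration, a minimal sketch of such an online schema change with lhm (the table and column are hypothetical, and an established ActiveRecord connection is assumed); instead of locking the table, lhm copies it in the background and swaps it in at the end:

require 'lhm'

# lhm creates a copy of the table, applies the change there, backfills the
# data in chunks and swaps the tables at the end, avoiding a long lock.
Lhm.change_table :reviews do |m|
  m.add_column :language_code, "VARCHAR(5) DEFAULT 'en'"
end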
With the above in mind we started looking into PostgreSQL. PostgreSQL does a lot of things well that MySQL doesn’t. For example, you can’t insert textual data into a numeric field:
olery_development=# create table example ( number int not null );
CREATE TABLE

olery_development=# insert into example (number) values (10);
INSERT 0 1

olery_development=# insert into example (number) values ('wat');
ERROR:  invalid input syntax for integer: "wat"
LINE 1: insert into example (number) values ('wat');
                                             ^
olery_development=# insert into example (number) values ('what is this 10 nonsense');
ERROR:  invalid input syntax for integer: "what is this 10 nonsense"
LINE 1: insert into example (number) values ('what is this 10 nonsen...
                                             ^
olery_development=# insert into example (number) values ('10 a');
ERROR:  invalid input syntax for integer: "10 a"
LINE 1: insert into example (number) values ('10 a');
PostgreSQL also has the ability to alter tables in various ways without locking them for every operation. For example, adding a column that has no default value and can be set to NULL can be done quickly without locking the entire table.
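A minimal sketch of such a change through Sequel (the table and column are made up); because the new column is nullable and has no default, PostgreSQL only has to update its catalog:

require 'sequel'

DB = Sequel.connect('postgres://localhost/olery_development')

# Adding a nullable column without a default only touches the catalog,
# so it completes quickly even on large tables.
DB.alter_table(:reviews) do
  add_column :source_url, String, null: true
end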
There are also various other interesting features available in PostgreSQL such as: trigram based indexing and searching, full-text search, support for querying JSON, support for querying/storing key-value pairs, pub/sub support and more.
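As one small example of the JSON querying support (the reviews table and its payload column are hypothetical, and DB is assumed to be a Sequel connection as in the previous sketch), the ->> operator extracts a JSON field as text:

# Fetch the titles of all reviews whose JSON payload marks them as English.
english_reviews = DB.fetch(
  "SELECT id, payload->>'title' AS title FROM reviews WHERE payload->>'language' = ?",
  'en'
).all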
Most important of all, PostgreSQL strikes a balance between performance, reliability, correctness and consistency.

Moving To PostgreSQL

In the end we decided to settle on PostgreSQL as it provides a balance between the various things we care about. The process of migrating an entire platform from MongoDB to a vastly different database is no easy task. To ease the transition we broke the process up into roughly three steps:
  1. Set up a PostgreSQL database and migrate a small subset of the data.
  2. Update all applications that rely on MongoDB to use PostgreSQL instead, along with whatever refactoring is required to support this.
  3. Migrate production data to the new database and deploy the new platform.

Migrating a Subset

Before we would even consider migrating all our data we needed to run tests using a small subset of the final data. There’s no point in migrating if you know that even a small chunk of data is going to give you lots of trouble.
While there are existing tools that can handle this, we also had to transform some of the data (e.g. fields being renamed, types being different, etc.), and as such we had to write our own tools. These were mostly one-off Ruby scripts that each performed a specific task, such as moving over reviews, cleaning up encodings, correcting primary key sequences and so on.
The initial testing phase didn’t reveal anything that would block the migration, although there were issues with some parts of our data. For example, certain user-submitted content wasn’t always encoded correctly and as a result couldn’t be imported without being cleaned up first. Another interesting change was converting the language names of reviews from their full names (“dutch”, “english”, etc.) to language codes, as our new sentiment analysis stack uses language codes instead of full names.
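To give a rough idea of what these one-off scripts looked like, here is a simplified sketch (the collection, table and field names, the legacy_id field and the language mapping are all hypothetical):

require 'mongo'
require 'sequel'

mongo = Mongo::Client.new(['127.0.0.1:27017'], database: 'olery')
pg    = Sequel.connect('postgres://localhost/olery_development')

# Map full language names to the codes used by the new sentiment analysis stack.
LANGUAGE_CODES = { 'dutch' => 'nl', 'english' => 'en' }

mongo[:reviews].find.each do |doc|
  pg[:reviews].insert(
    id:       doc['legacy_id'], # hypothetical numeric ID carried over from MongoDB
    # Replace invalid byte sequences in user submitted content.
    title:    doc['title'].to_s.encode('UTF-8', invalid: :replace, undef: :replace),
    rating:   doc['rating'],
    language: LANGUAGE_CODES.fetch(doc['language'], doc['language'])
  )
end

# After inserting rows with explicit IDs the primary key sequence lags behind.
pg.run("SELECT setval('reviews_id_seq', (SELECT MAX(id) FROM reviews))")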

Updating Applications

By far the most time was spent updating applications, especially those that relied heavily on MongoDB’s aggregation framework. Throw in a few legacy Rails applications with low test coverage and you have yourself a few weeks’ worth of work. The process of updating these applications was basically as follows:
  1. Replace MongoDB driver/model setup code with PostgreSQL related code
  2. Run tests
  3. Fix a few tests
  4. Run tests again, rinse and repeat until all tests pass
For non-Rails applications we settled on using Sequel, while we stuck with ActiveRecord for our Rails applications (at least for now). Sequel is a wonderful database toolkit, supporting most (if not all) PostgreSQL-specific features that we might want to use. Its query-building DSL is also much more powerful than ActiveRecord’s, although it can be a bit verbose at times.
As an example, say you want to calculate how many users use a certain locale along with the percentage of every locale (relative to the entire set). In plain SQL such a query could look like the following:
SELECT locale,
       count(*) AS amount,
       (count(*) / sum(count(*)) OVER ()) * 100.0 AS percentage
FROM users
GROUP BY locale
ORDER BY percentage DESC;
In our case this produces the following output (when using the PostgreSQL command-line interface):
 locale | amount |        percentage
--------+--------+--------------------------
 en     |   2779 | 85.193133047210300429000
 nl     |    386 | 11.833231146535867566000
 it     |     40 |  1.226241569589209074000
 de     |     25 |  0.766400980993255671000
 ru     |     17 |  0.521152667075413857000
        |      7 |  0.214592274678111588000
 fr     |      4 |  0.122624156958920907000
 ja     |      1 |  0.030656039239730227000
 ar-AE  |      1 |  0.030656039239730227000
 eng    |      1 |  0.030656039239730227000
 zh-CN  |      1 |  0.030656039239730227000
(11 rows)
Sequel allows you to write the above query in plain Ruby without the need for string fragments (as ActiveRecord often requires):
star = Sequel.lit('*')

User.select(:locale)
    .select_append { count(star).as(:amount) }
    .select_append { ((count(star) / sum(count(star)).over) * 100.0).as(:percentage) }
    .group(:locale)
    .order(Sequel.desc(:percentage))
If you don’t like using Sequel.lit('*') you can also use the following syntax:
User.select(:locale)
    .select_append { count(users.*).as(:amount) }
    .select_append { ((count(users.*) / sum(count(users.*)).over) * 100.0).as(:percentage) }
    .group(:locale)
    .order(Sequel.desc(:percentage))
While perhaps a bit more verbose, both of these queries make it easier to re-use parts of them, without having to resort to string concatenation.
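As a small sketch of that reuse (building on the queries above), the count expression can be extracted once and shared between the two aggregates:

# The shared count(*) expression, reused in both aggregates below.
amount = Sequel.function(:count, Sequel.lit('*'))

User.select(:locale)
    .select_append(amount.as(:amount))
    .select_append(((amount / Sequel.function(:sum, amount).over) * 100.0).as(:percentage))
    .group(:locale)
    .order(Sequel.desc(:percentage))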
In the future we might also move our Rails applications over to Sequel, but considering Rails is so tightly coupled to ActiveRecord we’re not entirely sure yet if this is worth the time and effort.

Migrating Production Data

Which finally brings us to the process of migrating the production data. There are basically two ways of doing this:
  1. Shut down the entire platform and bring it back online once all data has been migrated.
  2. Migrate data while keeping things running.
Option 1 has one obvious downside: downtime. Option 2, on the other hand, doesn’t require downtime but can be quite difficult to deal with. For example, in this setup you’d have to take into account any data being added while you’re migrating, as otherwise you’d lose data.
Luckily Olery has a rather unique setup in that most write operations to our database only happen at fairly regular intervals. The data that changes more often (e.g. user and contract information) is a rather small amount, meaning it takes far less time to migrate than our review data.
The basic flow of this part was:
  1. Migrate critical data such as users, contracts, basically all the data that we can not afford to lose in any way.
  2. Migrate less critical data (data that we can re-scrape, re-calculate, etc).
  3. Test if everything is up and running on a set of separate servers.
  4. Switch the production environment to these new servers.
  5. Re-migrate the data of step 1, ensuring data that was created in the meantime is not lost.
Step 2 took the longest by far, roughly 24 hours. On the other hand, migrating the data mentioned in steps 1 and 5 only took about 45 minutes.

Conclusion

It’s now been almost a month since we completed our migration and we are extremely satisfied so far. The impact has been nothing but positive, and in various cases it has even drastically increased the performance of our applications. For example, our Hotel Review Data API (running on Sinatra) ended up with even lower response times than before thanks to the migration:
Review Data API Performance
The migration took place on the 21st of January; the big peak is simply the application performing a hard restart (leading to slightly slower response times during the process). After the 21st the average response time was nearly cut in half.
Another case where we saw a big increase in performance was what we call the “review persister”. This application (running as a daemon) has a rather simple purpose: to save review data (reviews, review ratings, etc.). While we ended up making some pretty big changes to this application for the migration, the result was very rewarding:
Review Persister Performance
Our scrapers also ended up being a bit faster:
Review Collector Performance
The difference isn’t as extreme as with the review persister, but since the scrapers only use a database to check if a review exists (a relatively fast operation) this isn’t very surprising.
And last the application that schedules the scraping process (simply called the “scheduler”):
Scheduler Performance
Since the scheduler only runs at certain intervals the graph is a little bit hard to understand, but nevertheless there’s a clear drop in the average processing time after the migration.
In the end we’re very satisfied with the results so far and we certainly won’t miss MongoDB. The performance is great, the tooling surrounding it makes other databases pale in comparison, and querying data is much more pleasant than with MongoDB (especially for non-developers). While we do have one service (Olery Feedback) still using MongoDB (albeit a separate, rather small cluster), we intend to migrate it to PostgreSQL in the future as well.


The primary product of the OpenBSD project is the OpenBSD operating system, but sometimes other artifacts are produced as byproducts. Avant-garde web site design, funny email threads. Also, reusable code that can be beneficial to other developers, outside the strict confines of OpenBSD.
Unfortunately, sometimes this code doesn’t see the widest distribution. Often this can be the result of Not Invented Here syndrome, though other times it takes the appearance of a more pernicious problem. Invented by OpenBSD.
It was brought to my attention that NetBSD recently imported two OpenBSD functions, but reimplemented them in such a way as to be dangerously incompatible.

reallocarray

reallocarray was introduced in OpenBSD to reduce (though not solve) the problem of integer overflows. More about reallocarray in OpenBSD. The OpenBSD version is pretty short. And, as the initial commit says, it’s in a separate file so others can avoid reinventing the wheel.
Turns out reallocarray is pretty good at preventing integer overflows. So good, in fact, NetBSD imported it to fix an overflow in the regex library. This bug had been fixed in OpenBSD some months prior as part of a general sweep to replace all suspect looking allocations.
But then NetBSD decided to improve their manpage by adding an incorrect statement about zero sized allocations. The OpenBSD man page is quite clear:

If size or nmemb is equal to 0, a unique pointer to an access protected, zero sized object is returned.
There is no ambiguity. In fact, the initially imported implementation worked the same way on NetBSD. And so, in order to prevent anyone from pointing out that all implementations work the same way, NetBSD was forced to change their implementation. First, they added a reallocarr function with different semantics. Then they changed reallocarray to be a wrapper around that function, with newly incompatible semantics.
What’s different? With a zero sized allocation, the new and “improved” reallocarray will free the input pointer (something the original never did) and return NULL. At which time the caller will free the pointer again. That’s bad.
Was it really worth replacing one ten line function with another ten line function just to be different?
Oh, hey, look, reallocarray was fixed to behave like OpenBSD. Now it’s a 13 line replacement for a 10 line function. But at least it doesn’t have any code written by an OpenBSD developer.

strtonum

strtonum was introduced in OpenBSD a while back to solve a more mundane problem, so there aren’t any cool blog posts showing you how to use it. I did, however, write up some notes about the design of strtonum when I got tired of people misunderstanding it. The OpenBSD version has some subtleties, but you can generally work out what it does. Or, as a last resort, read the man page.
NetBSD also imported this function, but again as a wrapper around another function, strtoi. I’ll spare you the trouble of looking up what strtoi does and paste the code here:

__TYPE
INT_FUNCNAME(, _FUNCNAME, _l)(const char * __restrict nptr,
    char ** __restrict endptr, int base, __TYPE lo, __TYPE hi,
    int * rstatus, locale_t loc)
{
	return INT_FUNCNAME(_int_, _FUNCNAME, _l)(nptr, endptr, base,
	    lo, hi, rstatus, loc);
}
In case you were wondering: yes, you can read the NetBSD man page for this function, but it would mislead you. The man page text was copied from OpenBSD and describes the behavior of the original implementation, not the behavior of the function on NetBSD.
And as with reallocarray, it’s not like strtoi had any precedent. It was introduced in the same commit as strtonum. But why have one function when you can have two?

portability

The OpenBSD implementations of reallocarray and strtonum are deliberately designed to only depend on other standard C functions. The functions themselves are not widely portable, but the implementations are, so that developers are free to take the code and incorporate it. You aren’t required to also take a pile of other code because they aren’t wrappers built upon wrappers. The implementations are expressly permissively licensed to make this even easier, because multiple incompatible implementations damages the ecosystem.
Oh, and it’s not like this hasn’t happened before. Want to know a good way to get ssh to segfault? Pick a string function used by OpenSSH and add it to your libc, but reverse the arguments.
Long ago, the tactic used to keep OpenBSD functions like strlcpy out of the ecosystem was to simply not import them. But eventually, that tactic was overrun by all of the software including its own copy. The new tactic seems to be introducing incompatible versions of the same functions, so that nobody knows which ones are safe to use.
A critical vulnerability has been discovered in Google Apps for Work that allows hackers to abuse any website’s domain-based email addresses, which could then be used to send phishing emails on the company’s behalf in order to target users.

If you wish to have an email address based on your brand that reads like admin@yourdomain.com instead of myemail@gmail.com, you can register an account with Google Apps for Work.

The Google Apps for Work service allows you to use Gmail, Drive storage, Calendar, online documents, video Hangouts, and other collaborative services with your team or organization.

To get a custom domain-based email service from Google, one just needs to sign up, much like for a normal Gmail account. Once the account is created, you can access your domain’s admin console panel in the Google Apps interface, but you cannot use any of the services until your domain has been verified by Google.

SENDING PHISHING MAILS FROM HIJACKED ACCOUNTS
Cyber security researchers Patrik Fehrenbach and Behrouz Sadeghipour found that an attacker can register any unused domain (one not previously registered with the Google Apps service), for example bankofanycountry.com, with Google Apps for Work to obtain an 'admin@bankofanycountry.com' account.

But obviously, Google would not let you access the email service for 'admin@bankofanycountry.com' until domain verification has been completed, which means you can neither send nor receive email from that account.

However, the duo explained to The Hacker News that there is a page on Google Apps that allows a domain admin to send 'Sign in Instructions' to the organization’s users, e.g. info@bankofanycountry.com (which must be created from the panel before proceeding), by accessing the following URL directly in the browser:
https://admin.google.com/EmailLoginInstructions?userEmail=info@bankofanycountry.com
Using the compose email interface, as shown, an attacker could send any kind of phishing email containing a malicious link to the target users, in an attempt to trick them into revealing their personal information, including passwords, financial details or any other sensitive information.
BEFORE SECURITY PATCH
As shown below, the researchers successfully obtained admin@vine.com (Vine was acquired by Twitter) and sent a mail to a victim with the subject “Welcome to Twitter”, which could convince users to submit their Twitter credentials to the given phishing pages.
Researchers reported this security and privacy issue to the search engine giant, and the company has applied what I think is a partial patch for the flaw: it still allows an attacker to access ‘Send Sign in Instructions’ for unverified domains, but the emails now come from apps-noreply@google.com instead of the custom email address.
In an email conversation, Behrouz told The Hacker News, "Google believes that showing the sender as apps-noreply is good enough."
AFTER SECURITY PATCH
But the consequences are still the same, because it won’t stop hackers from targeting victims.
Generally, Google automatically helps identify spam and suspicious emails and marks them with spam or phishing warnings when they appear to be from a legitimate source, such as your bank or Google, but are not.

However, by abusing the above Google vulnerability, hackers can send phishing emails right into your inbox with no warning, as the email is generated from Google’s own servers.

While WhatsApp is keeping its new calling feature very much under wraps, cyber scammers are targeting WhatsApp users across the world by circulating fake messages inviting users to activate the new 'WhatsApp calling feature for Android', which infects their smartphones with malicious apps.

If you receive an invitation message from any of your friends saying, "Hey, I’m inviting you to try WhatsApp Free Voice Calling feature, click here to activate now —> http://WhatsappCalling.com", BEWARE! It is a scam.

The popular messaging app has begun rolling out its much-awaited Free Voice Calling feature — similar to other instant messaging apps like Skype and Viber — to Android users, allowing them to make voice calls over the Internet.

However, for now, the free WhatsApp calling feature is invite-only and only appears to work for people running the latest version of the WhatsApp app for Android on a Google Nexus 5 phone with the latest Android 5.0.1 Lollipop.

HOW TO ENABLE WHATSAPP CALLING FEATURE
The company has not announced the WhatsApp calling feature officially, but some users claim to have used it. The report broke two months ago, when a Reddit user (pradnesh07) from India reported that the WhatsApp calling feature was activated on his Android device after he received a WhatsApp voice call from a friend. The user also posted a screenshot on the discussion forum.

Because the feature is believed to be invite-only, millions of users across the world are eagerly waiting to access free voice calling on WhatsApp and searching the Internet for how to enable the WhatsApp calling feature for Android or iOS, and this is what scammers are taking advantage of.

Cyber scammers have allegedly started circulating fake invitations containing malicious links through social media, phishing emails, WhatsApp messages and scam websites in order to spread creepy malware and adware apps.

Once users click on the link, they land on another website where they are asked to take a survey on behalf of WhatsApp. The survey pushes users to download unknown applications and software that might contain malware.
With more than 700 million users, WhatsApp is among the most popular and preferred chat services worldwide, both for us and for scammers.

LEARN HOW TO PROTECT YOURSELF
In order to protect yourself from the 'WhatsApp calling feature' scam, you need to know that, at the time of writing:
  • The WhatsApp calling feature is currently available for Android Lollipop 5.0 and was successfully accessible via version 2.11.508 of the WhatsApp app.
  • The WhatsApp calling feature is still in beta.
  • The WhatsApp calling feature is not available through the Google Play Store; the app can be downloaded only from the official WhatsApp website, and the feature is enabled by INVITE only.

Recently the mobile-security firm Bluebox claimed that the brand new Xiaomi Mi4 LTE comes pre-installed with spyware/adware and a "forked", vulnerable version of the Android operating system on top of it; however, the company denies the claim.

Xiaomi, which is also known as the Apple of China, provides affordable, budget-friendly smartphones with almost all the features of an excellent smartphone.

On 5th March, when Bluebox researchers claimed to have discovered some critical flaws in the Mi4 LTE smartphone, Xiaomi issued a statement to The Hacker News claiming that "There are glaring inaccuracies in the Bluebox blog post" and that it was investigating the matter.

RESEARCHERS GET TROLLED BY CHINESE SELLERS
Now, Xiaomi has responded to Bluebox Labs with a lengthy denial of their claims, saying the new Mi4 smartphone purchased by the Bluebox team in China (known as the birthplace of fake smartphones) was not an original Xiaomi smartphone but a counterfeit product.
"We have concluded our investigation on this topic — the device Bluebox obtained is 100% proven to be a counterfeit product purchased through an unofficial channel on the streets in China," Xiaomi spokesperson told The Hacker News in an email statement. "It is therefore not an original Xiaomi product and it is not running official Xiaomi software, as Bluebox has also confirmed in their updated blog post."
This means the Mi4 LTE smartphone owned by Bluebox was tampered with by the local Chinese shops themselves. What the heck! Chinese get trolled by Chinese.

XIAOMI DECLINES BLUEBOX CLAIMS
Xiaomi provided a detailed step-by-step explanation on each and every fact and figure:
  1. Hardware: Xiaomi hardware experts have analysed the internal device photos provided to the company by Bluebox and confirmed that the physical hardware is markedly different from the original Mi 4 smartphone.
  2. IMEI number: Xiaomi after-sales team has confirmed that the IMEI on the device from Bluebox is a cloned IMEI number which has been previously used on other counterfeit Xiaomi devices in China.
  3. Software: Xiaomi MIUI team has also confirmed that the software installed on the device from Bluebox is not an official Xiaomi MIUI build.
The company assured its customers that their devices neither come rooted, nor have any malware pre-installed.

Contrary to Bluebox’s claims, the company also assured its customers that the MIUI used in its products is true Android, meaning MIUI follows Google’s Android CDD (Compatibility Definition Document) exactly and passes all Android CTS tests to make sure a given device is fully Android compatible.

Rejecting Bluebox’s findings, Xiaomi released the following statement in an email to The Hacker News:
As this device is not an original Xiaomi product, and not running an official Xiaomi MIUI software build, Bluebox’s findings are completely inaccurate and not representative of Xiaomi devices. We believe Bluebox jumped to a conclusion too quickly without a fully comprehensive investigation (for example, they did not initially follow our published hardware verification process correctly due to language barrier) and their attempts to contact Xiaomi were inadequate, considering the severity of their accusations.
With the large parallel street market for mobile phones in China, there exists counterfeit products that are almost indistinguishable on the outside. This happens across all brands, affecting both Chinese and foreign smartphone companies selling in China. Furthermore, "entrepreneurial" retailers may add malware and adware to these devices, and even go to the extent of pre-installing modified copies of popular benchmarking software such as CPU-Z and Antutu, which will run "tests" showing the hardware is legitimate.
Xiaomi takes all necessary measures to crack down on the manufacturers of fake devices or anyone who tampers with our software, supported by all levels of law enforcement agencies in China.