TikTok tests an Instagram-style grid and other changes

Short-form video app TikTok, the fourth most downloaded app in the world as of last quarter, is working on several new, seemingly Instagram-inspired features — including a Discover page, a grid-style layout similar to Instagram Explore, an account switcher and more.

The features were uncovered this week by reverse-engineering specialist Jane Manchun Wong, who published screenshots of these features and others to Twitter.

A TikTok spokesperson declined to offer further details on the company’s plans, but confirmed the features were things the company is working on.

“We’re always experimenting with new ways to improve the app experience for our community,” the spokesperson said.

The most notable change uncovered by Wong is one to TikTok’s algorithmically generated “For You” page. Today, users flip through each video on this page, one by one, in a vertical feed-style format. The updated version instead offers a grid-style layout, which looks more like Instagram’s Explore page. This design would also allow users to tap on the videos they want to watch, while more easily bypassing those they don’t. And because it puts more videos on the page, the change could quickly increase the amount of input TikTok’s recommendation engine receives about a user’s preferences.

Another key change being developed is the addition of a “Discover” tab to TikTok’s main navigation.

The new button appears to replace the current Search tab, which today is labeled with a magnifying glass icon. The Search section currently lets you enter keywords, and returns results that can be filtered by users, sounds, hashtags or videos. It also showcases trending hashtags on the main page. The “Discover” button, meanwhile, has a people icon on it, which hints that it could be helping users find new people to follow on TikTok, rather than just videos and sounds.

This change, if it rolls out as described, could be a big deal for TikTok creators, as it arrives at a time when the app has gained critical mass and penetrated the mainstream. Younger users have been caught up in TikTok, finding its stars more real and approachable than reigning YouTubers. TikTokers and their fans even swarmed VidCon this month, leading some to wonder if a paradigm shift for online video was soon to come.

A related feature, “Suggested Users,” could also come into play here in terms of highlighting top talent.

Getting on an app’s “Suggested” list is often key to becoming a top creator on the platform. It’s how many Viners and Twitter users initially grew their follower bases, for instance.

However, TikTok diverges from Instagram with two other new features Wong found, both focused on popularity metrics. One test shows the “Like” counts on each video on the Sounds and Hashtags pages, and another shows the number of Downloads on the video itself, in addition to the Likes and Shares.

This would be an interesting change in light of the competitive nature of social media. And its timing is significant. Instagram is now backing away from showing Like counts, in a test running in a half dozen countries. The company made the change in response to public pressure regarding the anxiety that using its service causes.

Of course, in the early days of a social app, Like counts and other metrics are tools that help point users to the breakout, must-follow stars. They also encourage more posting as users try to find content that resonates — which then, in turn, boosts their online fame in a highly trackable way.

TikTok is also taking note of how integrations with other social platforms could benefit its service, similar to how the Facebook, Instagram, WhatsApp and Messenger apps have offered features to drive traffic to one another and otherwise interoperate.

A couple of features Wong found focused on improving connections with other social apps, including one that offered better integration with WhatsApp and another that would allow users to link their account to Google and Facebook.

A few other changes being tested include an Instagram-like account switcher interface, a “Liked by Creator” comment badge and a downgrade for the TikCode (QR code), which moves from the user profile to the app’s settings.

Of course, one big caveat with all of this is that just because a feature is spotted in the app’s code doesn’t mean it will launch to the public.

Some of these changes may be tested privately and then scrapped entirely, or may still be works in progress. But being able to see a collection of experiments at one time like this — something that’s not possible without the sort of reverse engineering that Wong does — helps to paint a larger picture of the direction an app may be headed. In TikTok’s case, it seems to understand its potential, as well as when to borrow successful ideas from others who have come before it and when to go its own way.

Huawei’s new OS is for industrial use, not Android replacement

Seems Hongmeng isn’t the Android replacement it’s been pitched as, after all. The initial story certainly tracked, as Huawei has been preparing for the very real possibility of life after Google, but the Chinese hardware giant says the operating system is primarily focused on industrial use.

The latest report arrives courtesy of Chinese state news agency Xinhua, which notes that the OS has been in development for far longer than the Trump-led Huawei ban has been in effect. Hongmeng is a relatively simple operating system compared to the likes of Android, according to SVP Catherine Chen. The news echoes another recent report that Huawei had initially developed the software for use on IoT devices.

None of this means that Huawei isn’t working on a full mobile operating system, of course. Or that the seeds of this new OS couldn’t be adapted to do more.

And given the recent news, such a move would be a pretty good use of the company’s vast resources. After all, it’s no doubt seen the writing on the wall for some time. While no one anticipated that such a ban would arrive so suddenly, questions about the company have been floated in security circles for years now.

New restrictions from the Trump administration barred Huawei from working with American companies like Google, but temporary reprieves have allowed the smartphone maker to keep employing Android services for now. Questions about the company’s health are still very much up in the air, however, as the ban ramps back up.

Instagram will now warn you before your account gets deleted, offer in-app appeals

Instagram this morning announced several changes to its moderation policy, the most significant of which is that it will now warn users if their account could become disabled before that actually takes place. This change addresses a longstanding issue where users would launch Instagram only to find that their account had been shut down without any warning.

While it’s one thing for Instagram to disable accounts for violating its stated guidelines, the service’s automated systems haven’t always gotten things right. The company has come under fire before for banning innocuous photos — those of mothers breastfeeding their children, for example, or art. (Or, you know, Madonna.)

Now the company says it will introduce a new notification process that will warn users if their account is at risk of becoming disabled. The notification will also allow them to appeal the removal of content in some cases.

For now, users will be able to appeal moderation decisions around Instagram’s nudity and pornography policies, as well as its bullying and harassment, hate speech, drug sales and counter-terrorism policies. Over time, Instagram will expand the appeal capabilities to more categories.

The change means users won’t be caught off guard by Instagram’s enforcement actions. Plus, they’ll be given a chance to appeal a decision directly in the app, instead of only through the Help Center as before.

In addition, Instagram says it will step up its enforcement against bad actors.

Previously, it could remove accounts that had a certain percentage of content in violation of its policies. Now it will also be able to remove accounts that rack up a certain number of violations within a window of time.

“Similarly to how policies are enforced on Facebook, this change will allow us to enforce our policies more consistently and hold people accountable for what they post on Instagram,” the company says in its announcement.

The changes follow a recent threat of a class-action lawsuit against the photo-sharing network led by the Adult Performers Actors Guild. The organization claimed Instagram was banning the adult performers’ accounts, even when there was no nudity being shown.

“It appears that the accounts were terminated merely because of their status as an adult performer,” James Felton, the Adult Performers Actors Guild legal counsel, told The Guardian in June. “Efforts to learn the reasons behind the termination have been futile,” he said, adding that the Guild was considering legal action.

The Electronic Frontier Foundation (EFF) also this year launched an anti-censorship campaign, TOSSed Out, which aimed to highlight how social media companies unevenly enforce their terms of service. As part of its efforts, the EFF examined the content moderation policies of 16 platforms and app stores, including Facebook, Twitter, the Apple App Store and Instagram.

It found that only four companies — Facebook, Reddit, Apple and GitHub — had committed to actually informing users when their content was censored as to which community guideline violation or legal request had led to that action.

“Providing an appeals process is great for users, but its utility is undermined by the fact that users can’t count on companies to tell them when or why their content is taken down,” said Gennie Gebhart, EFF associate director of research, at the time of the report. “Notifying people when their content has been removed or censored is a challenge when your users number in the millions or billions, but social media platforms should be making investments to provide meaningful notice.”

Instagram’s policy change focused on cracking down on repeat offenders is rolling out now, while the ability to appeal decisions directly within the app will arrive in the coming months.

The FTC looks to change children’s privacy law following complaints about YouTube

The U.S. Federal Trade Commission is considering an update to the laws governing children’s privacy online, known as the COPPA Rule (or, the Children’s Online Privacy Protection Act). The Rule first went into effect in 2000 and was amended in 2013 to address changes in how children use mobile devices and social networking sites. Now, the FTC believes it may be due for more revisions. The organization is seeking input and comments on possible updates, some of which are specifically focused on how to address sites that aren’t necessarily aimed at children, but have large numbers of child users.

In other words, sites like YouTube.

The FTC’s announcement comes only weeks after U.S. consumer advocacy groups and Senator Ed Markey (D-Mass.) sent complaint letters to the FTC, urging the regulators to investigate YouTube for potential COPPA violations.

The advocacy groups allege that YouTube is hiding behind its terms of service, which claim YouTube is “not intended for children under 13” — a statement that’s clearly no longer true. Today, the platform is filled with videos designed for viewing by kids. Google even offers a YouTube Kids app aimed at preschool- to tween-aged children, but it’s optional. Kids can freely browse YouTube’s website and often interact with the service via the YouTube TV app — a platform where YouTube Kids has a limited presence.

According to the letter written by the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD), Google has now collected personal information from nearly 25 million children in the U.S., and it used this data to engage in “very sophisticated digital marketing techniques.”

The groups want YouTube to delete the children’s data, set up an age gate on the site and separate out any kids’ content into its own app, where YouTube will have to properly follow COPPA guidelines.

These demands are among the factors pushing the FTC toward this action.

The Commission says it wants input as to whether COPPA should be updated to better address websites and online services that are not traditionally aimed at children but are used by kids, as well as whether these “general audience platforms” should have to identify and police the child-directed content that’s uploaded by third parties.

In other words, should the FTC amend COPPA so it can protect the privacy of the kids using YouTube?

“In light of rapid technological changes that impact the online children’s marketplace, we must ensure COPPA remains effective,” said FTC Chairman Joe Simons, in a published statement. “We’re committed to strong COPPA enforcement, as well as industry outreach and a COPPA business hotline to foster a high level of COPPA compliance. But we also need to regularly revisit and, if warranted, update the Rule,” he added.

While YouTube is a key focus, the FTC will also seek comment on whether there should be an exception for parental consent for the use of educational technology in schools. And it wants to better understand the implications for COPPA in terms of interactive media, like interactive TV (think Netflix’s Minecraft: Story Mode, for example), or interactive gaming.

More broadly, the FTC says it wants to know how COPPA has impacted the availability of sites and services aimed at children.

The decision to initiate a review of COPPA was unanimous among the FTC’s five commissioners, who include three Republicans and two Democrats.

Led by Simons, the FTC in February took action against Musical.ly (now TikTok), issuing a record $5.7 million fine for its COPPA violations. Similar to YouTube, the app was used by a number of under-13 kids without parental consent. The company knew this was the case, but continued to collect the kids’ personal information regardless.

“This record penalty should be a reminder to all online services and websites that target children: We take enforcement of COPPA very seriously, and we will not tolerate companies that flagrantly ignore the law,” Simons had said at the time.

The settlement with TikTok required the company to delete children’s videos and data and restrict underage users from being able to film videos.

It’s unclear why the FTC can’t now require the same of YouTube, given the similarities between the two services, without amending the law.

“They absolutely can and should fine YouTube, not to mention force YouTube to make significant changes, under the current regulations,” says Josh Golin, the Executive Director for CCFC. “As for the YouTube decision – by far the most important COPPA case in the agency’s history – it’s extremely concerning that the Commission appears to be signaling they do not have the authority under the current rules to hold YouTube accountable,” he says.

“COPPA rules could use some updating but the biggest problem with the law is the FTC’s lack of enforcement, which is something the Commission could address right away without a lengthy comment period,” Golin adds.

The FTC says it will hold a public workshop on October 7, 2019 to examine the COPPA Rule.

iOS 13: Here are the new security and privacy features you might’ve missed

In just a few weeks Apple’s new iOS 13, the thirteenth major iteration of its popular iPhone software, will be out — along with new iPhones and a new iPad version, the aptly named iPadOS. We’ve taken iOS 13 for a spin over the past few weeks — with a focus on the new security and privacy features — to see what’s new and how it all works.

Here’s what you need to know.

You’ll start to see reminders about apps that track your location

Ever wonder which apps track your location? Wonder no more. iOS 13 will periodically remind you about apps that are tracking your location in the background. Every so often, it will tell you how many times an app has tracked where you’ve been over a recent period of time, along with a small map of the location points. From this screen you can choose to “always allow” the app to track your location or limit the tracking.

You can grant an app your location just once

To give you more control over what data apps have access to, iOS 13 now lets you give apps access to your location just once. Previously the options were “always,” “never” or “while using,” meaning an app could be collecting your real-time location as you used it. Now you can grant an app access on a per-use basis — particularly helpful for the privacy-minded.
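
For developers, nothing about the request itself changes; on iOS 13 the standard permission prompt simply gains an “Allow Once” choice. Below is a minimal Swift sketch of the usual flow, assuming a plain CLLocationManager setup and an NSLocationWhenInUseUsageDescription entry in the app’s Info.plist:

```swift
import CoreLocation

final class OneTimeLocationFetcher: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func requestLocationOnce() {
        manager.delegate = self
        // The standard prompt; on iOS 13 the system adds the "Allow Once"
        // option automatically, with no extra API call from the app.
        manager.requestWhenInUseAuthorization()
        manager.requestLocation() // asks for a single location fix
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        if let coordinate = locations.last?.coordinate {
            print("Got one-time location:", coordinate)
        }
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        print("Location request failed:", error)
    }
}
```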

And apps wanting access to Bluetooth can be declined

Screen Shot 2019 07 18 at 12.18.38 PM

Apps wanting to access Bluetooth will also ask for your consent. Although apps can use Bluetooth to connect to gadgets, like fitness bands and watches, Bluetooth-enabled tracking devices known as beacons can be used to monitor your whereabouts. These beacons are found everywhere — from stores to shopping malls. They can grab your device’s unique Bluetooth identifier and track your physical location between places, building up a picture of where you go and what you do — often for targeting you with ads. Blocking Bluetooth connections from apps that clearly don’t need it will help protect your privacy.
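
On the developer side, the consent dialog is triggered the first time an app spins up Core Bluetooth. A rough Swift sketch, assuming a simple CBCentralManager and an NSBluetoothAlwaysUsageDescription string in the app’s Info.plist:

```swift
import CoreBluetooth

final class BluetoothAccessChecker: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager?

    func start() {
        // Creating the manager is what triggers iOS 13's Bluetooth consent dialog.
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        switch central.state {
        case .unauthorized:
            print("User declined Bluetooth access")
        case .poweredOn:
            print("Bluetooth is available to the app")
        default:
            print("Bluetooth state changed:", central.state.rawValue)
        }
    }
}
```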

Find My gets a new name — and offline tracking

Find My, the new name for the app that locates your friends and lost devices, now comes with offline tracking. Previously, if you lost your laptop, you’d have to rely on its last Wi-Fi-connected location. Now it broadcasts its location over Bluetooth, which is securely uploaded to Apple’s servers via nearby cellular-connected iPhones and other Apple devices. The location data is cryptographically scrambled and anonymized to prevent anyone other than the device owner — including Apple — from tracking your lost devices.

Your apps will no longer be able to snoop on your contacts’ notes

Another area that Apple is trying to lock down is your contacts. Apps have to ask for your permission before they can access your contacts. But in doing so, they were also able to access the personal notes you wrote on each contact — their home alarm code or a PIN for phone banking, for example. Now, apps will no longer be able to see what’s in the “notes” field of a user’s contacts.
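
In practice, apps still fetch contacts the same way; what changes is that the notes field is gated off (reportedly behind a separate, Apple-granted entitlement). Here is a hedged Swift sketch of a typical fetch that simply leaves the note key out — the function name is mine:

```swift
import Contacts

// Fetches contact names only; the "note" field is deliberately not in
// keysToFetch, since iOS 13 gates it off from ordinary apps.
func fetchContactNames(completion: @escaping ([CNContact]) -> Void) {
    let store = CNContactStore()
    store.requestAccess(for: .contacts) { granted, _ in
        guard granted else { return completion([]) }

        let keys: [CNKeyDescriptor] = [
            CNContactGivenNameKey as CNKeyDescriptor,
            CNContactFamilyNameKey as CNKeyDescriptor
        ]
        let request = CNContactFetchRequest(keysToFetch: keys)

        var results: [CNContact] = []
        try? store.enumerateContacts(with: request) { contact, _ in
            results.append(contact)
        }
        completion(results)
    }
}
```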

Sign In With Apple lets you use a fake relay email address

This is one of the cooler features coming soon — Apple’s new sign-in option allows users to sign in to apps and services with one tap, and without having to turn over any sensitive or private information. Any app that requires sign-in must offer Sign In With Apple as an option. Users can choose to share their email address with the app maker, or use a private “relay” email that hides their real address so the app only sees a unique, Apple-generated one instead. Apple says it doesn’t collect users’ data, making it a more privacy-minded solution. It works across all devices, including Android devices and websites.
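
For developers, the flow runs through Apple’s AuthenticationServices framework. A minimal sketch of kicking off the request (assuming the delegate and presentation wiring live elsewhere in the app):

```swift
import AuthenticationServices

// Starts the Sign In With Apple flow. Whether the app receives the user's
// real email or an Apple-generated relay address is decided by the user in
// the system sheet; the app only sees the result in the returned credential.
func startSignInWithApple(
    using delegate: ASAuthorizationControllerDelegate & ASAuthorizationControllerPresentationContextProviding
) {
    let request = ASAuthorizationAppleIDProvider().createRequest()
    request.requestedScopes = [.fullName, .email]

    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = delegate
    controller.presentationContextProvider = delegate
    controller.performRequests()
}
```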

You can silence unknown callers

Here’s one way you can cut down on disruptive spam calls: iOS 13 will let you send unknown callers straight to voicemail. Anyone who’s not in your contacts list is considered an unknown caller.

You can strip location metadata from your photos

Every time you take a photo, your iPhone stores the precise location of where it was taken as metadata in the photo file. That can reveal sensitive or private locations — such as your home or office — if you share those photos on social media or other platforms, many of which don’t strip the data when photos are uploaded. Now you can do it yourself: with a few taps, you can remove the location data from a photo before sharing it.
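
The new control lives in the share sheet, but for illustration, here’s roughly how an app could strip GPS metadata itself using Apple’s ImageIO framework — a sketch, with the function name and error handling my own:

```swift
import Foundation
import ImageIO

// Copies an image's data while nulling out its GPS dictionary,
// which removes the embedded location metadata from the copy.
func strippingGPS(from imageData: Data) -> Data? {
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let type = CGImageSourceGetType(source) else { return nil }

    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, type, 1, nil) else {
        return nil
    }

    // Setting the GPS dictionary to kCFNull drops it from the copied image.
    let options: [CFString: Any] = [kCGImagePropertyGPSDictionary: kCFNull as Any]
    CGImageDestinationAddImageFromSource(destination, source, 0, options as CFDictionary)

    return CGImageDestinationFinalize(destination) ? (output as Data) : nil
}
```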

And Safari gets better anti-tracking features

Apple continues to advance its anti-tracking technologies in its native Safari browser, like preventing cross-site tracking and browser fingerprinting. These features make it far more difficult for advertisers to track users across the web. iOS 13 has cross-site tracking prevention enabled by default, so users are protected from the very beginning.

Facebook accused of contradicting itself on claims about platform policy violations

Prepare your best *unsurprised face*: Facebook is being accused of contradicting itself in separate testimonies made on both sides of the Atlantic.

The chair of the UK parliamentary committee that spent the lion’s share of last year investigating online disinformation — going on to grill multiple Facebook execs as part of an enquiry that coincided with a global spotlight being cast on Facebook as a result of the Cambridge Analytica data misuse scandal — has penned another letter to the company. This time he is asking which of the claims it has made regarding policy-violating access to data by third-party apps on its platform are actually true.

In the letter, which is addressed to Facebook global spin chief and former UK deputy prime minister Nick Clegg, Damian Collins cites paragraph 43 of the Washington DC Attorney General’s complaint against the company — which asserts that the company “knew of other third party applications [i.e. in addition to the quiz app used to siphon data off to Cambridge Analytica] that similarly violated its Platform Policy through selling or improperly using consumer data”, and also failed to take “reasonable measures” to enforce its policy.

The Washington, D.C. Attorney General, Karl Racine, is suing Facebook for failing to protect user data — per allegations filed last December.

Collins’ letter notes Facebook’s denial of the allegations in paragraph 43 — before raising apparently contradictory evidence the company gave the committee last year on multiple occasions, such as the testimony of its CTO Mike Schroepfer, who confirmed it was reviewing whether Palantir improperly used Facebook data, among “lots” of other apps of concern; and testimony by Facebook’s Richard Allan to an international grand committee last November, when the VP of EMEA public policy claimed the company has “taken action against a number of applications that failed to meet our policies”.

The letter also cites evidence contained in documents the DCMS committee seized from Six4Three, pertaining to a separate lawsuit against Facebook, which Collins asserts demonstrate “the lax treatment of abusive apps and their developments by Facebook”.

He also writes that these documents show Facebook had special agreements with a number of app developers — that allowed some preinstalled apps to “circumvent users’ privacy settings or platform settings, and to access friends’ information”, as well as noting that Facebook whitelisted some 5,200 apps “according to our evidence”.

“The evidence provided by representatives of Facebook to this Select committee and the International Grand Committee as well as the Six4Three files directly contradict with Facebook’s answer to Paragraph 43 of the complaint filed against Facebook by the Washington, D.C. Attorney General,” he writes.

“If the version of events presented in the answer to the lawsuit is correct, this means the evidence given to this Committee and the International Grand Committee was inaccurate.”

Collins goes on to ask Facebook to “confirm the truthfulness” of the evidence given by its reps last year, and to provide the list of applications removed from its platform in response to policy violations — a list Allan promised in November to provide to the committee but has so far failed to deliver.

We’ve also reached out to Facebook to ask which of its versions of events is the one it’s standing by at this time.

Watch an unfiltered interview of PicsArt founder at Disrupt Berlin

Smartphones have become a creative playground thanks to cameras and innovative apps, such as PicsArt. With PicsArt, anybody can add filters, stickers and tweak photos and videos in many different ways. It has been a massive hit with 130 million monthly active users. And that’s why I’m excited to announce that PicsArt founder and CEO Hovhannes Avoyan is joining us at TechCrunch Disrupt Berlin.

PicsArt started with a simple app that lets you edit photos before sharing them. There are many companies in this space, including VSCO, Snapseed and Prisma. But PicsArt has managed to become a cultural phenomenon in many countries including China.

If you’re thinking about editing a photo or video in one way or another, chances are you can do it in PicsArt. In addition to traditional editing tools (cropping, rotating, curves, etc.), you can add filters, auto-beautify your face, change your hair color, add stickers and text, cut out your face and use masks just like in Photoshop… I’m not going to list everything you can do because it’s a long list.

The result is an app packed with features that lets you express yourself, tell visual stories and improve your social media skills. If you’re an Instagram user, chances are you’ve seen more than one photo that has been edited using PicsArt.

While the app is free with ads, users can also pay for a premium subscription to unlock additional features. And PicsArt is not just about editing — you can also use the app as its own social network.

PicsArt is based in the U.S. and has raised $45 million over the years. But the company is also betting big on Armenia, with a large engineering team there.

And it’s a natural fit, as Hovhannes Avoyan is originally from Armenia. In addition to PicsArt, he has founded many successful startups in the past, which were acquired by companies including Lycos, Bertelsmann, GFI, Teamviewer and Helpsystems. Many entrepreneurs would have a hard time founding even one successful startup, so I can’t wait to hear how Avoyan manages to work on so many different products and turn them into successes.

Buy your ticket to Disrupt Berlin to listen to this discussion and many others. The conference will take place on December 11-12.

In addition to panels and fireside chats, like this one, new startups will participate in the Startup Battlefield to compete for the highly coveted Battlefield Cup.

Hovhannes Avoyan is a serial entrepreneur, investor and scholar. He is the founder and CEO of PicsArt, the #1 photo and video editing app and community with more than 130 million monthly active users. PicsArt is backed by Sequoia Capital, Insight Venture Partners, DCM and Siguler Guff. The company employs more than 350 people and is headquartered in San Francisco with offices across the globe in Yerevan, Armenia; Los Angeles; Beijing; and an AI Lab in Moscow.

Avoyan brings more than 25 years of experience in computer programming and global business management. Prior to PicsArt, Avoyan founded five other startups, all of which had successful acquisitions by global companies including Lycos, Bertelsmann, GFI, Teamviewer, and Helpsystems.

He is a graduate of Harvard Business School’s Bertelsmann Senior Executive program. He received his B.S. and M.S. from the State Engineering University of Armenia and his M.A. in Political Science and International Affairs from the American University of Armenia. He’s also a frequent speaker at business conferences on topics ranging from business strategy to international team building and AI.

FaceApp gets federal attention as Sen. Schumer raises alarm on data use

It’s been hard to get away from FaceApp over the last few days, whether it’s your friends posting weird selfies using the app’s aging and other filters, or the brief furore over its apparent (but not actual) circumvention of permissions on iPhones. Now even the Senate is getting in on the fun: Sen. Chuck Schumer (D-NY) has asked the FBI and the FTC to look into the app’s data handling practices.

“I write today to express my concerns regarding FaceApp,” he writes in a letter sent to FBI Director Christopher Wray and FTC Chairman Joseph Simons. I’ve excerpted his main concerns below:

In order to operate the application, users must provide the company full and irrevocable access to their personal photos and data. According to its privacy policy, users grant FaceApp license to use or publish content shared with the application, including their username or even their real name, without notifying them or providing compensation.

Furthermore, it is unclear how long FaceApp retains a user’s data or how a user may ensure their data is deleted after usage. These forms of “dark patterns,” which manifest in opaque disclosures and broader user authorizations, can be misleading to consumers and may even constitute a deceptive trade practices. Thus, I have serious concerns regarding both the protection of the data that is being aggregated as well as whether users are aware of who may have access to it.

In particular, FaceApp’s location in Russia raises questions regarding how and when the company provides access to the data of U.S. citizens to third parties, including potentially foreign governments.

For the cave-dwellers among you (and among whom I normally would proudly count myself), FaceApp is a selfie app that uses AI-esque techniques to apply various changes to faces, making them look older or younger, adding accessories and, infamously, changing their race. That didn’t go over so well.

There’s been a surge in popularity over the last week, but it was also noticed that the app seemed to be able to access your photos whether you said it could or not. It turns out that this is actually a normal capability of iOS, but it was being deployed here in somewhat of a sneaky manner and not as intended. And arguably it was a mistake on Apple’s part to let this method of selecting a single photo go against the “never” preference for photo access that a user had set.

Fortunately the Senator’s team is not worried about this or even the unfounded (we checked) concerns that FaceApp was secretly sending your data off in the background. It isn’t. But it very much does send your data to Russia when you tell it to give you an old face, or a hipster face, or whatever. Because the computers that do the actual photo manipulation are located there — these filters are being applied in the cloud, not directly on your phone.

His concerns are over the lack of transparency that user data is being sent out to servers who knows where, to be kept for who knows how long, and sold to who knows whom. Fortunately the obliging FaceApp managed to answer most of these questions before the Senator’s letter was ever posted.

The answers to his questions, should we choose to believe them, are that user data is not in fact sent to Russia, the company doesn’t track users and usually can’t, doesn’t sell data to third parties, and deletes “most” photos within 48 hours.

Although the “dark patterns” of which the Senator speaks are indeed an issue, and although it would have been much better if FaceApp had said up front what it does with your data, this is hardly an attempt by a Russian adversary to build up a database of U.S. citizens.

While it is good to see Congress engaging with digital privacy, asking the FBI and FTC to look into a single app seems unproductive when that app is not doing much that a hundred others, American and otherwise, have been doing for years. Cloud-based processing and storage of user data is commonplace — though usually disclosed a little better.

Certainly as Sen. Schumer suggests, the FTC should make sure that “there are adequate safeguards in place to protect the privacy of Americans…and if not, that the public be made aware of the risks associated with the use of this application or others similar to it.” But this seems the wrong nail to hang that on. We see surreptitious slurping of contact lists, deceptive deletion promises, third-party sharing of poorly anonymized data, and other bad practices in apps and services all the time — if the federal government wants to intervene, let’s have it. But let’s have a law or a regulation, not a strongly worded letter written after the fact.

Schumer Faceapp Letter by TechCrunch on Scribd