Adobe Tests Doubling the Price of Photography Plan With Photoshop and Lightroom

Adobe today quietly debuted new pricing for its Photography bundle, which has long been available for $9.99 per month. The company’s website is now listing a price tag of $19.99 per month, double the previous price.

The bundle includes access to Photoshop CC, Lightroom CC, and Lightroom Classic, and is aimed at photographers. In a statement provided to PetaPixel, Adobe said that it is testing new pricing tiers.

“From time to time, we run tests on Adobe.com which cover a range of items, including plan options that may or may not be presented to all visitors to Adobe.com. We are currently running a number of tests on Adobe.com.”

Most users appear to be seeing the updated pricing on the Adobe website, but there is a hidden section of the site where one can still purchase the Photography plan for $9.99 per month.

The new plan does offer 1TB of storage instead of 20GB of storage, but for those who do not use Adobe storage, the new pricing doubles the cost with no added benefit.

It is not clear whether Adobe is planning a permanent price change for its Photography plan or whether prices will change for existing subscribers in the future. If you previously signed up for the Photography option, you’re likely still paying $9.99 per month.

Adobe offers other plans, with a single app priced at $20.99 per month and access to all apps at $52.99 per month, but it has offered the lower-cost $9.99 per month Photography plan since 2013.

U.S.-based investment-grade corporate bond funds see 14th week of inflows

Investors gravitated toward the higher-quality spectrum of the credit markets this week, as U.S.-based investment-grade corporate bond funds attracted about $374.5 million in net cash, their 14th…

A quiet London-based payments startup just raised among the biggest Series A rounds ever in Europe

You probably haven’t heard of Checkout, a digital payments processing company that was founded in 2012 in London. Apparently, however, investors have been keeping tabs on the low-flying company and like what they see. Today, Checkout announced that it has raised $230 million in Series A funding at a valuation just shy of $2 billion, in a round co-led by Insight Partners and DST Global, with participation from GIC, the Singaporean sovereign-wealth fund, Blossom Capital, Endeavor Catalyst and other unnamed strategic investors.

It’s the first institutional round for the company; it’s also one of the biggest Series A rounds ever for a European company.

What’s so special about Checkout that investors felt compelled to write such big checks? In a sea filled with fintech startups, it’s hard to know at first glance what differentiates it — or whether investors merely spy a huge opportunity, particularly given the company’s recent revenue numbers.

Checkout helps businesses — including Samsung, Adidas, Deliveroo and Virgin, among others — to accept a range of payment types across their online stores around the world. According to the WSJ, the fees from these services are adding up, too. It says Checkout’s European business generated $46.8 million in gross revenue and $6.7 million in profit in 2017, information it dug up through Companies House, the United Kingdom’s registrar of companies.

Checkout also plays into two huge trends that seem to be lifting all boats — the ongoing boom in online shopping, and the growing number of businesses using online payments. Little wonder that investors poured more than four times as much into payments startups last year as they did in 2017 ($22 billion, according to Dow Jones VentureSource data cited by the WSJ).

Little wonder, too, that payments startups that have gone public are faring well, including the global payments company Adyen, which IPO’d on the Euronext in June of last year and has mostly seen its shares move in one direction since. Indeed, the company, valued at $2.3 billion by investors in 2015, is now valued at nearly $21 billion.

Though Checkout’s Series A is stunning for its size, according to Dealroom data, it isn’t the largest for a European company. Among other giant rounds, the U.K.-based biotech company Immunocore closed on $320 million in Series A funding in 2015. In 2017, another U.K. fintech, OakNorth, a digital bank that focuses on loans for small and medium enterprises, raised $200 million in Series A funding. (It has gone on to raise roughly $850 million altogether.)

More recently, TradePlus24, a two-year-old, Zurich, Switzerland-based fintech company that insures the accounts receivable of small and mid-size businesses against default, also raised a healthy amount: $120 million in Series A funding. Its backers include Credit Suisse and the insurance broker Kessler.

Microsoft makes a push to simplify machine learning

Ahead of its Build conference, Microsoft today released a slew of new machine learning products and tweaks to some of its existing services. These range from no-code tools to hosted notebooks, with a number of new APIs and other services in between. The core theme here, though, is that Microsoft is continuing its strategy of democratizing access to AI.

Ahead of the release, I sat down with Microsoft’s Eric Boyd, the company’s corporate vice president of its AI platform, to discuss Microsoft’s take on this space, where it competes heavily with the likes of Google and AWS, as well as numerous, often more specialized startups. And to some degree, the actual machine learning technologies have become table stakes. Everybody now offers pre-trained models, open-source tools and the platforms to train, build and deploy models. If one company doesn’t have pre-trained models for some use cases that its competitors support, it’s only a matter of time before it will. It’s the auxiliary services and the overall developer experience, though, where companies like Microsoft, with its long history of developing these tools, can differentiate themselves.

“AI is really impacting the way the world does business,” Boyd said. “We see 75% of commercial enterprises are doing more with AI in the next several years. It’s tripled in the last couple years, according to Gartner. And so, we’re really seeing an explosion in the amount of work that’s coming from there. As people are driving this forward, as companies are driving this forward, developers are on the front lines, trying to figure out how to move their companies forward, how to build these models and how to build these applications, and help scale with all the changes that are moving through this.”

What these companies — and their developers — need is more powerful tools that allow them to become more productive and build their models faster. At Microsoft, where these companies are often large enterprises, that also includes being able to scale up to the needs of an enterprise and offer the security guarantees they need.

As companies start adopting machine learning, though, they are now also getting to a point where they have moved from a few tests to maybe running a hundred models in production. That comes with its own challenges. “They are trying to figure out how to manage the life cycle of these models,” he said. “How do I think of the operational cycle? How do I think about a new model that I’m ready to deploy? When is it ready to go?”

Only a few years ago, the industry started moving to a DevOps model for managing code. What Microsoft essentially wants to move to is MLOps for managing models. “It’s very similar to DevOps, but there’s some distinct differences in terms of how the tools operate,” Boyd noted. “At Microsoft, we’re really focusing on how do we solve these problems to make developers way more productive, using these enterprise tools to drive these changes that they need across their organization.” This means thinking about how to bring concepts like source control and continuous development to machine learning models, for example, and that will take new tools.

It’s no surprise, then, that adding more MLOps capabilities is a major part of today’s releases. The company is integrating some of these functions into Azure DevOps, for example, which allows teams to trigger release pipelines for their models. It is also giving developers and data scientists tools for model version control, so they can track and manage their assets and share machine learning pipelines.
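
As a rough illustration of what that model version control step can look like in practice, here is a minimal sketch using the Azure Machine Learning Python SDK (azureml-core), where registering a model under the same name repeatedly produces new versions. The workspace configuration, file path, model name and tags below are illustrative assumptions, not details from Microsoft’s announcement.

```python
# Minimal sketch: registering a trained model as a versioned asset with the
# Azure Machine Learning Python SDK (azureml-core). The workspace config file,
# model path, and tag values are illustrative assumptions.
from azureml.core import Workspace
from azureml.core.model import Model

# Loads connection details from a local config.json (assumed to exist).
ws = Workspace.from_config()

# Each register() call with the same model_name creates a new version,
# which is the basic building block for tracking and rolling back models.
model = Model.register(
    workspace=ws,
    model_path="outputs/churn_model.pkl",   # hypothetical local artifact
    model_name="churn-model",
    tags={"framework": "scikit-learn", "stage": "candidate"},
)

print(model.name, model.version)
```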

These are very much tools for advanced machine learning practitioners, though. On the other side of the spectrum, Microsoft also announced a number of automated machine learning tools, including one that essentially automates all of the processes, as well as a visual model builder, which grew out of the Azure ML Studio. As Boyd told me, even companies like British Petroleum and Oregon’s Deschutes Brewery (try their Black Butte Porter if you get a chance) now use these tools.

“We’ve added a bunch of features into automated machine learning to simplify how people are trying to use this kind of work,” Boyd noted.

Microsoft today also launched a number of new services in its Cognitive Services lineup, including a new personalization service, an API for recognizing handwriting and another one for transcribing conversations with multiple speakers. The personalization service stands out here because it uses reinforcement learning, a different machine learning technique from most other Cognitive Services tools, and because it is far easier to implement than similar services. For business users, there’s also the Form Recognizer, which makes extracting data from forms easy.

What’s more interesting than the specific features, though, is that Microsoft is shifting its emphasis here a little bit. “We’re moving away from some of the first-level problems of ‘here’s the table stakes, you have to have an AI platform,’ to much more sophisticated use cases around the operations of these algorithms, the simplification of them, new user experiences to really simplify how developers work and much richer cognitive services,” Boyd explained.

Microbiome testing service uBiome puts its co-founders on administrative leave after FBI raid

The microbiome testing service uBiome has placed its founders and co-chief executives, Jessica Richman and Zac Apte, on administrative leave following an FBI raid on the company’s offices last week.

The company’s board of directors has named John Rakow, currently the company’s general counsel, as its interim chairman and chief executive, the company said in a statement.

Directors of the company are also conducting an independent investigation into the company’s billing practices, which is being overseen by a special committee of the board.

It was only last week that the FBI went to the company’s headquarters to search for documents related to an ongoing investigation. What’s at issue is the way that the company was billing insurers for the microbiome tests it was performing on customers.

“As interim CEO of uBiome, I want all of our stakeholders to know that we intend to cooperate fully with government authorities and private payors to satisfactorily resolve the questions that have been raised, and we will take any corrective actions that are needed to ensure we can become a stronger company better able to serve patients and healthcare providers,” Rakow said in a statement.

“My confidence is based on the significant clinical evidence and medical literature that demonstrates the utility and value of uBiome’s products as important tools for patients, health care providers and our commercial partners,” added Mr. Rakow.

It’s been a rough few weeks for consumer companies working on developing microbiome testing services and treatments based on those diagnoses. In addition to the FBI raid on uBiome, the Seattle-based company Arivale was forced to shut down its “consumer program” after raising more than $50 million from investors, including Maveron, Polaris Partners and ARCH Venture Partners.

UBiome is backed by investors including Andreessen Horowitz, OS Fund, 8VC, Y Combinator, DNA Capital, Crunchfund, StartX, Kapor Capital, Starlight Ventures and 500 Startups.

Microsoft extends its Cognitive Services with personalization service, handwriting recognition APIs and more

As part of its rather bizarre news dump before its flagship Build developer conference next week, Microsoft today announced a slew of new pre-built machine learning models for its Cognitive Services platform. These include an API for building personalization features, a form recognizer for automating data entry, a handwriting recognition API and an enhanced speech recognition service that focuses on transcribing conversations.

Maybe the most important of these new services is the Personalizer. There are few apps and websites, after all, that aren’t looking to provide their users with personalized features. That’s difficult, in part, because it often involves building models based on data that sits in a variety of silos. With Personalizer, Microsoft is betting on reinforcement learning, a machine learning technique that doesn’t need the kind of labeled training data typically used elsewhere in machine learning. Instead, the reinforcement learning agent constantly tries to find the best way to achieve a given goal based on what users do. Microsoft argues that it is the first company to offer a service like this, and it has been testing the service on Xbox, where it saw a 40% increase in engagement with its content after implementing it.
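
To make that rank-and-reward loop concrete, here is a minimal Python sketch of how a client might call a reinforcement-learning personalization service like Personalizer over REST: ask the service to rank candidate actions for the current context, then report back a reward score based on what the user actually did. The endpoint URLs, key header and feature names are placeholders and assumptions rather than verbatim API documentation.

```python
# Sketch of the rank-and-reward loop behind a reinforcement-learning
# personalization service. Endpoint paths, keys and feature names are
# assumptions/placeholders, not verbatim from Microsoft's docs.
import uuid
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}              # placeholder

event_id = str(uuid.uuid4())

# 1) Ask the service to rank candidate actions for the current context.
rank_response = requests.post(
    f"{ENDPOINT}/personalizer/v1.0/rank",
    headers=HEADERS,
    json={
        "eventId": event_id,
        "contextFeatures": [{"timeOfDay": "evening", "device": "xbox"}],
        "actions": [
            {"id": "doc-racing-game", "features": [{"genre": "racing"}]},
            {"id": "doc-rpg-game", "features": [{"genre": "rpg"}]},
        ],
    },
)
chosen = rank_response.json()["rewardActionId"]
print("Show this item first:", chosen)

# 2) Observe what the user did, then send a reward score between 0.0 and 1.0
#    so the model keeps improving without labeled training data. 1.0 here
#    stands in for "the user engaged with the recommendation".
requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
    headers=HEADERS,
    json={"value": 1.0},
)
```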

The handwriting recognition API, or Ink Recognizer as it is officially called, can automatically recognize handwriting, common shapes and documents. That’s something Microsoft has long focused on as it developed its Windows 10 inking capabilities, so maybe it’s no surprise that it is now packaging this up as a cognitive service, too. Indeed, Microsoft Office 365 and Windows use exactly this service already, so we’re talking about a pretty robust system. With this new API, developers can now bring these same capabilities to their own applications, too.
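
For a sense of what calling such a handwriting-recognition API could look like, here is a heavily hedged Python sketch that posts digital-ink strokes to a recognition endpoint. The URL, payload fields and point encoding are assumptions about the preview API’s general shape, not confirmed details.

```python
# Sketch: sending digital-ink strokes to a handwriting-recognition endpoint
# such as Ink Recognizer. Each stroke is one pen-down/pen-up gesture. The URL
# and payload fields are assumptions based on the preview API's general shape.
import requests

ENDPOINT = "https://api.cognitive.microsoft.com/inkrecognizer/v1.0-preview/recognize"  # assumed
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}  # placeholder

payload = {
    "language": "en-US",
    "strokes": [
        # Points are flattened "x1,y1,x2,y2,..." coordinates in ink space.
        {"id": 1, "points": "100.0,50.0,102.5,50.3,105.1,50.9"},
        {"id": 2, "points": "110.0,48.0,110.2,55.7"},
    ],
}

response = requests.post(ENDPOINT, headers=HEADERS, json=payload)
# The response groups strokes into recognized words, shapes and layout units.
print(response.json())
```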

Conversation Transcription does exactly what the name implies: it transcribes conversations and it’s part of Microsoft’s existing speech-to-text features in the Cognitive Services lineup. It can label different speakers, transcribe the conversation in real time and even handle crosstalk. It already integrates with Microsoft Teams and other meeting software.

Also new is the Form Recognizer, an API that makes it easier to extract text and data from business forms and documents. This may not sound like a very exciting feature, but it solves a very common problem: the service needs only five samples to learn how to extract data, and users don’t have to do any of the arduous manual labeling that’s often involved in building these systems.
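
Here is a hedged Python sketch of the train-then-analyze flow such a form-extraction service implies: train a custom model on a handful of sample forms, then send new documents for extraction. The preview endpoint paths, payload fields and file handling below are assumptions, not taken from Microsoft’s documentation.

```python
# Sketch of a train-then-analyze flow for a form-extraction API. Endpoint
# paths, payload fields and content types are assumptions, not verbatim docs.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}              # placeholder

# 1) Train a custom model on a handful of sample forms stored in blob storage
#    (the SAS URL below is a placeholder).
train = requests.post(
    f"{ENDPOINT}/formrecognizer/v1.0-preview/custom/train",
    headers=HEADERS,
    json={"source": "https://<storage>.blob.core.windows.net/forms?<sas-token>"},
)
model_id = train.json()["modelId"]

# 2) Send a new form and read back the extracted key/value pairs and tables.
with open("invoice.pdf", "rb") as f:
    result = requests.post(
        f"{ENDPOINT}/formrecognizer/v1.0-preview/custom/models/{model_id}/analyze",
        headers={**HEADERS, "Content-Type": "application/pdf"},
        data=f,
    )
print(result.json())
```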

Form Recognizer is also coming to cognitive services containers, which allow developers to take these models outside of Azure and to their edge devices. The same is true for the existing speech-to-text and text-to-speech services, as well as the existing anomaly detector.

In addition, the company also today announced that its Neural Text-to-Speech, Computer Vision Read and Text Analytics Named Entity Recognition APIs are now generally available.

Some of these existing services are also getting some feature updates, with the Neural Text-to-Speech service now supporting five voices, while the Computer Vision API can now understand more than 10,000 concepts, scenes and objects, together with 1 million celebrities, compared to 200,000 in a previous version (are there that many celebrities?).

Microsoft brings Plug and Play to IoT

Microsoft today announced that it wants to bring the ease of use of Plug and Play, which lets you plug virtually any peripheral into a Windows PC without having to worry about drivers, to IoT devices. Typically, getting an IoT device connected and up and running takes some work, even with modern deployment tools. The promise of IoT Plug and Play is that it will greatly simplify this process and do away with the hardware and software configuration steps that are still needed today.

As Azure corporate vice president Julia White writes in today’s announcement, “one of the biggest challenges in building IoT solutions is to connect millions of IoT devices to the cloud due to heterogeneous nature of devices today – such as different form factors, processing capabilities, operational system, memory and capabilities.” This, Microsoft argues, is holding back IoT adoption.

IoT Plug and Play, on the other hand, offers developers an open modeling language that will allow them to connect these devices to the cloud without having to write any code.
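
For illustration only, the sketch below uses a Python dict to stand in for the kind of JSON device capability model this modeling language revolves around, in which a device declares its telemetry, properties and commands so the cloud can discover them without custom code. All identifiers and field names here are hypothetical.

```python
# Illustrative sketch: a Python dict standing in for a JSON device capability
# model of the sort IoT Plug and Play is built around. All IDs and field
# names are hypothetical, not taken from Microsoft's specification.
import json

capability_model = {
    "@id": "urn:contoso:environment_sensor:1",   # hypothetical model identifier
    "@type": "CapabilityModel",
    "displayName": "Environment Sensor",
    "implements": [
        {
            "schema": {
                "@id": "urn:contoso:environment_sensor:sensing:1",
                "@type": "Interface",
                "contents": [
                    # What the device sends, exposes, and accepts.
                    {"@type": "Telemetry", "name": "temperature", "schema": "double"},
                    {"@type": "Property", "name": "firmwareVersion", "schema": "string", "writable": False},
                    {"@type": "Command", "name": "reboot"},
                ],
            }
        }
    ],
}

print(json.dumps(capability_model, indent=2))
```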

Microsoft can’t do this alone, though, since it needs the support of the hardware and software manufacturers in its IoT ecosystem, too. The company has already signed up a number of partners, including Askey, Brainium, Compal, Kyocera, STMicroelectronics, Thundercomm and VIA Technologies. The company says that dozens of devices are already Plug and Play-ready and potential users can find them in the Azure IoT Device Catalog.