Cycle Computing

Type: Privately held company
Industry: Software
Founded: 2005
Headquarters: Greenwich, Connecticut, United States
Area served: Worldwide
Key people: Jason Stowe (CEO)
Website: www.cyclecomputing.com

Cycle Computing is a company that provides software for orchestrating computing and storage resources in cloud environments. Its flagship product, CycleCloud, supports Amazon Web Services, Google Compute Engine, Microsoft Azure, and internal infrastructure. The CycleCloud orchestration suite provisions cloud infrastructure, orchestrates workflow execution and job queue management, automates data placement, and provides process monitoring and logging within a secure process flow.

History

Cycle Computing was founded in 2005.[1] Its original offerings were built around the HTCondor scheduler and focused on maximizing the effectiveness of internal resources. Cycle Computing offered support for HTCondor as well as CycleServer, which provided metascheduling, reporting, and management tools for HTCondor resources. Early customers spanned a number of sectors, including insurance, pharmaceuticals, manufacturing, and academia.

With the advent of large public cloud offerings, Cycle Computing expanded its tools to let customers use dynamically provisioned cloud environments. Key technologies developed include validation that resources were correctly provisioned in the cloud (patent awarded in 2015[2]), management of data placement and consistency, and support for multiple cloud providers within a single workflow.

Large runs

In April 2011, Cycle Computing announced “Tanuki”, a 10,000-core Amazon Web Services cluster used by Genentech.[3]

In September 2011, Cycle Computing offered an HPC cluster called Nekomata (Japanese for "monster cat") for rent at $1,279 per hour, providing 30,472 processor cores with 27 TB of memory and 2 PB of storage. An unnamed pharmaceutical company used the cluster for seven hours of molecular modeling at a cost of about $9,000.[4][5][6]

In April 2012, Cycle Computing announced that, working in collaboration with the scientific software company Schrödinger, it had screened 21 million compounds in less than three hours using a 50,000-core cluster.[7]

In November 2013, Cycle Computing announced that, again in collaboration with Schrödinger, it had helped Mark Thompson, a professor of chemistry at the University of Southern California, screen about 205,000 compounds in search of materials for a new generation of inexpensive, highly efficient solar panels. The job took less than a day and cost $33,000 in total. The computing cluster used 156,000 cores spread across eight regions and had a peak capacity of 1.21 petaFLOPS.[8][9][10][11][12]

In November 2014, Cycle Computing worked with a researcher at HGST to run a hard-drive simulation workload. The computation, which would have taken over a month on internal resources, completed in seven hours on 70,000 cores in Amazon Web Services at a cost of less than $6,000.[13][14]

In September 2015, Cycle Computing and the Broad Institute announced a 50,000-core cluster to run on Google Compute Engine.[15]

Media coverage

Cycle Computing has been covered by GigaOm,[7][10] Ars Technica,[6] ExtremeTech,[4] CNet,[11] and Phys.org.[9]

Cycle Computing was also mentioned by Amazon CTO Werner Vogels in the 2013 Day 2 Keynote of AWS re:Invent.[16]

References

  1. "About Us". Retrieved February 5, 2015.
  2. "Method and system for automatically detecting and resolving infrastructure faults in cloud infrastructure".
  3. "Cycle Computing fires up 10,000-core HPC cloud on EC2".
  4. Anthony, Sebastian (September 20, 2011). "Rent the world's 30th-fastest, 30,472-core supercomputer for $1,279 per hour". ExtremeTech. Retrieved January 26, 2014.
  5. "New CycleCloud HPC Cluster Is a Triple Threat: 30000 cores, $1279/Hour, & Grill monitoring GUI for Chef". Cycle Computing. September 19, 2011. Retrieved January 26, 2014.
  6. Brodkin, Jon (September 20, 2011). "$1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud". Ars Technica. Retrieved January 26, 2014.
  7. Darrow, Barb (April 19, 2012). "Cycle Computing spins up 50K core Amazon cluster". GigaOm. Retrieved January 26, 2014.
  8. "Back to the Future: 1.21 petaFLOPS(RPeak), 156,000-core CycleCloud HPC runs 264 years of Materials Science". Cycle Computing. November 12, 2013. Retrieved January 26, 2014.
  9. Yirka, Bob (November 12, 2013). "Cycle Computing uses Amazon computing services to do work of supercomputer". Phys.org. Retrieved January 26, 2014.
  10. Darrow, Barb (November 12, 2013). "Cycle Computing once again showcases Amazon's high-performance computing potential". GigaOm. Retrieved January 26, 2014.
  11. Shankland, Stephen (November 12, 2013). "Supercomputing simulation employs 156,000 Amazon processor cores". CNet. Retrieved January 26, 2014.
  12. Brueckner, Rich (November 13, 2013). "Slidecast: How Cycle Computing Spun Up a Petascale CycleCloud". Inside HPC. Retrieved January 26, 2014.
  13. "HGST buys 70,000-core cloud HPC Cluster, breaks record, returns it 8 hours later". Retrieved February 5, 2016.
  14. "Cycle Helps HGST Stand Up 70,000 Core AWS Cloud".
  15. "Google, Cycle Computing Pair for Broad Genomics Effort".
  16. Vogels, Werner. "AWS re:Invent 2013 Day 2 Keynote with Werner Vogels". AWS re:Invent 2013. Retrieved January 30, 2014.
