Amazon Patent Applications

Patent applications filed by Amazon Technologies, Inc. and published by the U.S. Patent and Trademark Office (USPTO).

  • Publication number: 20180199204
    Abstract: A provisioning device may be shipped to a client and used to automatically provision an IoT device to join a local network to communicate with a remote service provider. In response to a trigger input, the provisioning device creates a wireless hotspot that is recognizable by an IoT device as a provisioning hotspot. The provisioning device receives a signal from the IoT device indicating that the IoT device is available to be provisioned. The provisioning device obtains provisioning data and transmits the provisioning data to the IoT device. The IoT device uses the provisioning data to connect to a local wireless network and to establish a connection to the remote service provider. The IoT device may then use one or more IoT services of the service provider.
    Type: Application
    Filed: March 5, 2018
    Publication date: July 12, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Kyle Michael Roche, James Christopher Sorenson, III
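    The provisioning handshake described in the abstract above can be pictured as a short exchange between the two devices. Below is a minimal, hypothetical Python sketch; the class names, method names, and the shape of the provisioning data are illustrative assumptions rather than details taken from the application.

    # Hypothetical sketch of the provisioning handshake in 20180199204.
    class ProvisioningDevice:
        def __init__(self, provisioning_data):
            self.provisioning_data = provisioning_data  # e.g. Wi-Fi credentials, endpoint
            self.hotspot_active = False

        def on_trigger(self):
            # A trigger input (e.g. a button press) creates the provisioning hotspot.
            self.hotspot_active = True
            return "PROVISIONING-HOTSPOT"               # SSID recognizable by IoT devices

        def on_device_available(self, device):
            # The IoT device signaled that it is available to be provisioned.
            if self.hotspot_active:
                device.receive_provisioning_data(self.provisioning_data)

    class IoTDevice:
        def __init__(self):
            self.connected = False

        def receive_provisioning_data(self, data):
            # The device uses the data to join the local network and reach the provider.
            self.connected = self.join_network(data["ssid"], data["password"])

        def join_network(self, ssid, password):
            print(f"joining {ssid} and registering with remote service provider")
            return True

    provisioner = ProvisioningDevice({"ssid": "home-wifi", "password": "secret"})
    device = IoTDevice()
    provisioner.on_trigger()
    provisioner.on_device_available(device)
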
  • Publication number: 20180196827
    Abstract: Methods, apparatus, and computer-accessible storage media for controlling export of snapshots to external networks in service provider environments. Methods are described that may be used to prevent customers of a service provider from downloading snapshots of volumes, such as boot images created by the service provider or provided by third parties, to which the customer does not have the appropriate rights. A request may be received from a user to access one or more snapshots, for example a request to export the snapshot or a request for a listing of snapshots. For each snapshot, the service provider may determine if the user has rights to the snapshot, for example by checking a manifest for the snapshot to see if entries in the snapshot manifest belong to an account other than the customer's. If the user has rights to the snapshot, the request is granted; otherwise, the request is not granted.
    Type: Application
    Filed: March 9, 2018
    Publication date: July 12, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Arun Sundaram, Yun Lin, David Carl Salyers
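    The rights check described above essentially scans a snapshot's manifest for entries owned by an account other than the requester. A minimal sketch under that assumption, with invented field names:

    # Hypothetical sketch of the manifest-based rights check in 20180196827.
    def can_export_snapshot(snapshot_manifest, requesting_account):
        """Grant export only if every manifest entry belongs to the requester."""
        return all(entry["account_id"] == requesting_account
                   for entry in snapshot_manifest)

    manifest = [
        {"chunk": "chunk-001", "account_id": "customer-a"},
        {"chunk": "chunk-002", "account_id": "provider"},   # third-party boot image data
    ]
    print(can_export_snapshot(manifest, "customer-a"))  # False: export request denied
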
  • Publication number: 20180198863
    Abstract: A network-based data store may implement retention-based management techniques for data stored at the network-based data store. When data is received for storage at the network-based data store, a retention time for the data may be determined. Storage locations at persistent storage devices of the network-based data store may be selected according to the retention time. The data may then be placed at the storage locations. When a request to delete data is received, retention times of co-located data may be evaluated to determine whether the deletion may be delayed. Delayed deletions may allow the data to be subsequently deleted with at least some of the co-located data. Repair operations to maintain the data according to a durability policy may be modified according to the retention time for data suffering a loss of redundancy.
    Type: Application
    Filed: March 5, 2018
    Publication date: July 12, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Timothy James Davis, Rajesh Shanker Patel, Bradley Eugene Marshall, Jonathan Robert Collins
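    One way to picture retention-based placement and delayed deletion is to bucket data by retention time and postpone deletes when co-located data expires soon anyway. The bucketing rule and the delay criterion below are assumptions for illustration, not details from the application:

    # Hypothetical sketch of retention-based placement and delayed deletion from 20180198863.
    from collections import defaultdict

    class RetentionStore:
        def __init__(self):
            # Co-locate data with similar retention times in the same placement bucket.
            self.devices = defaultdict(dict)       # bucket -> {key: retention_time}

        def put(self, key, retention_time):
            bucket = retention_time // 30           # 30-day placement buckets (assumed)
            self.devices[bucket][key] = retention_time

        def delete(self, key):
            for bucket, data in self.devices.items():
                if key in data:
                    # Delay the delete if co-located data expires soon anyway.
                    soonest = min(data.values())
                    if soonest <= 30:
                        return f"delete of {key} delayed; bucket expires in {soonest} days"
                    del data[key]
                    return f"{key} deleted immediately"
            return "not found"

    store = RetentionStore()
    store.put("obj-a", 25)
    store.put("obj-b", 28)
    print(store.delete("obj-a"))
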
  • Publication number: 20180197122
    Abstract: Methods and apparatus for portable network interfaces to manage authentication and license enforcement. A system may include a plurality of resource instances including a producer instance configured to implement a network-accessible service, and an authentication coordinator. The coordinator may assign an interface record to the service, wherein the interface record comprises an IP address and a set of security properties. The coordinator may configure the security properties to allow a client to request an attachment of the interface record to a selected resource instance, such that the selected resource instance is enabled to transmit network messages from the IP address using one or more physical network interfaces of the selected resource instance. The producer resource instance initiates authentication operations for the service, including at least one authentication operation based on the IP address of the interface record.
    Type: Application
    Filed: March 9, 2018
    Publication date: July 12, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Christopher Richard Jacques de Kadt, James Alfred Gordon Greenfield
  • Publication number: 20180196782
    Abstract: Systems and methods for presenting a user interface in a first mode and a second mode based on detection of a touch gesture are described herein. In some embodiments, a first user interface may be presented on an electronic device's display. The first user interface may include a list of items, which may be formatted such that they are optimally viewable from a first distance away from the display. In response to detecting a touch gesture, such as a scrolling gesture, a second user interface may be presented including the list of items, which may be formatted such that they are optimally viewed from a second distance. For example, the first user interface may be optimally viewable from a distance of approximately seven to ten feet from the display. As another example, the second user interface may be optimally viewable from a distance of approximately one to three feet.
    Type: Application
    Filed: June 14, 2016
    Publication date: July 12, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Timothy Thomas Gray, Thomas Irvine Nelson, Jae Pum Park, Shilpan Bhagat
  • Publication number: 20180189336
    Abstract: A distributed storage system may store data object instances in persistent storage and may cache keymap information for those data object instances. The system may cache a latest symbolic key entry for some user keys of the data object instances. When a request is made for the latest version of stored data object instances having a specified user key, the latest version may be determined dependent on whether a latest symbolic key entry exists for the specified user key, and keymap information for the latest version may be returned. When storing keymap information, a flag may be set to indicate that a corresponding latest symbolic key entry should be updated. The system may delete a latest symbolic key entry for a particular user key from the cache in response to determining that no other requests involving the keymap information for data object instances having the particular user key are pending.
    Type: Application
    Filed: January 22, 2018
    Publication date: July 5, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Jason G. McHugh, Praveen Kumar Gattu, Michael A. Ten-Pow, Derek Ernest Denny-Brown, II
  • Publication number: 20180189367
    Abstract: A programmatic interface is implemented, enabling a client of a stream management service to select a data ingestion policy for a data stream. A client request selecting an at-least-once ingestion policy is received. In accordance with the at-least-once policy, a client may transmit an indication of a data record one or more times to the service until a positive acknowledgement is received. In response to receiving a plurality of transmissions indicating a particular data record, respective positive acknowledgements are sent to the client. Based on a persistence policy selected for the stream, copies of the data record are stored at one or more storage locations in response to one particular transmission of the plurality of transmissions.
    Type: Application
    Filed: December 29, 2017
    Publication date: July 5, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Marvin Michael Theimer, Gaurav D. Ghare, John David Dunagan, Greg Burgess, Ying Xiong
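    Under the at-least-once policy described above, the client simply retransmits a record until a positive acknowledgement arrives, while the service acknowledges every transmission but persists the record once. A minimal sketch under those assumptions (the lost-acknowledgement simulation is contrived):

    # Hypothetical sketch of at-least-once ingestion from 20180189367.
    import random
    random.seed(1)                                  # deterministic demo

    class StreamService:
        def __init__(self):
            self.stored = set()

        def ingest(self, record_id, payload):
            # Acknowledge every transmission, but persist the record only once.
            if record_id not in self.stored:
                self.stored.add(record_id)
            return "ACK"

    def send_at_least_once(service, record_id, payload, max_attempts=5):
        for attempt in range(1, max_attempts + 1):
            if random.random() < 0.3:               # simulate a lost acknowledgement
                continue
            if service.ingest(record_id, payload) == "ACK":
                return attempt
        raise RuntimeError("no positive acknowledgement received")

    service = StreamService()
    print("acknowledged after attempt", send_at_least_once(service, "rec-1", b"data"))
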
  • Publication number: 20180189373
    Abstract: For a given cross-data-store transaction request at a storage service, a coordinator transmits respective voting transition requests to a plurality of log-based transaction managers (LTMs) configured for the respective data stores to which writes are directed in the transaction. The LTMs transmit responses to the coordinator based on data-store-specific conflict detection performed using contents of the voting transition requests and respective data-store-specific state transition logs. The coordinator determines a termination status of the cross-data-store transaction based on the LTMs' responses, and provides an indication of the termination status to the LTMs.
    Type: Application
    Filed: February 26, 2018
    Publication date: July 5, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Uphendra Bhalchandra Shevade, Gregory Rustin Rogers, Christopher Ian Hendrie
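    The coordinator/LTM exchange described above is essentially a vote-and-commit round. The sketch below is a loose illustration; the conflict check on overlapping keys stands in for whatever data-store-specific conflict detection an LTM would actually perform:

    # Hypothetical sketch of the voting round in 20180189373.
    class LogTransactionManager:
        def __init__(self, name, committed_keys):
            self.name = name
            self.committed_keys = set(committed_keys)   # stands in for the state transition log

        def vote(self, write_keys):
            # Data-store-specific conflict detection: reject overlapping writes.
            return "yes" if self.committed_keys.isdisjoint(write_keys) else "no"

    def coordinate(transaction_writes, managers):
        votes = [m.vote(transaction_writes.get(m.name, set())) for m in managers]
        status = "COMMITTED" if all(v == "yes" for v in votes) else "ABORTED"
        for m in managers:                              # propagate the termination status
            print(f"{m.name}: transaction {status}")
        return status

    ltms = [LogTransactionManager("orders", {"k1"}), LogTransactionManager("billing", set())]
    coordinate({"orders": {"k2"}, "billing": {"k9"}}, ltms)
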
  • Publication number: 20180182061
    Abstract: Methods, systems, and computer-readable media for placement optimization for virtualized graphics processing are disclosed. A provider network comprises a plurality of instance locations for physical compute instances and a plurality of graphics processing unit (GPU) locations for physical GPUs. A GPU location for a physical GPU or an instance location for a physical compute instance is selected in the provider network. The GPU location or instance location is selected based at least in part on one or more placement criteria. A virtual compute instance with attached virtual GPU is provisioned. The virtual compute instance is implemented using the physical compute instance in the instance location, and the virtual GPU is implemented using the physical GPU in the GPU location. The physical GPU is accessible to the physical compute instance over a network. An application is executed using the virtual GPU on the virtual compute instance.
    Type: Application
    Filed: February 26, 2018
    Publication date: June 28, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Nicholas Patrick Wilt, Ashutosh Tambe
  • Publication number: 20180181470
    Abstract: A system that implements a data storage service may store data on behalf of storage service clients. The system may maintain data in multiple replicas of partitions that are stored on respective computing nodes in the system. A master replica for a replica group may increment a membership version indicator for the group, and may propagate metadata (including the membership version indicator) indicating a membership change for the group to other members of the group. Propagating the metadata may include sending a log record containing the metadata to the other replicas to be appended to their respective logs. Once the membership change becomes durable, it may be committed. A replica attempting to become the master of a replica group may determine that another replica in the group has observed a more recent membership version, in which case logs may be synchronized or snipped, or the attempt may be abandoned.
    Type: Application
    Filed: February 2, 2018
    Publication date: June 28, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Timothy Andrew Rath, Jakub Kulesza, David Alan Lutz
  • Publication number: 20180182062
    Abstract: Methods, systems, and computer-readable media for application-specific virtualized graphics processing are disclosed. A virtual compute instance is provisioned from a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. A virtual GPU is attached to the virtual compute instance. The virtual GPU is selected based at least in part on requirements of an application. The virtual GPU is implemented using a physical GPU, and the physical GPU is accessible to the virtual compute instance over a network. The application is executed using the virtual GPU on the virtual compute instance.
    Type: Application
    Filed: February 26, 2018
    Publication date: June 28, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Nicholas Patrick Wilt, Ashutosh Tambe, Nathan Lee Burns
  • Publication number: 20180181330
    Abstract: A data storage system includes multiple head nodes and data storage sleds. The data storage sleds include multiple mass storage devices and a sled controller. Respective ones of the head nodes are configured to obtain credentials for accessing particular portions of the mass storage devices of the data storage sleds. A sled controller of a data storage sled determines whether a head node attempting to perform a write on a mass storage device of a data storage sled that includes the sled controller is presenting with the write request a valid credential for accessing the mass storage devices of the data storage sled. If the credentials are valid, the sled controller causes the write to be performed and if the credentials are invalid, the sled controller returns a message to the head node indicating that it has been fenced off from the mass storage device.
    Type: Application
    Filed: December 28, 2016
    Publication date: June 28, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Norbert P. Kusters, Nachiappan Arumugam, Christopher Nathan Watson, Marc John Brooker, David R. Richardson, Danny Wei, John Luther Guthrie, II, Leah Shalev
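    The fencing behaviour described above reduces to a credential check in front of every write handled by the sled controller. A minimal sketch, with an invented token format:

    # Hypothetical sketch of the sled-controller fencing check in 20180181330.
    class SledController:
        def __init__(self, valid_credentials):
            self.valid_credentials = valid_credentials  # credential per mass storage device

        def write(self, device_id, credential, data):
            if self.valid_credentials.get(device_id) != credential:
                return {"ok": False, "error": "fenced off from mass storage device"}
            # Credential is valid: perform the write.
            return {"ok": True, "bytes_written": len(data)}

    sled = SledController({"disk-0": "token-abc"})
    print(sled.write("disk-0", "token-abc", b"payload"))    # accepted
    print(sled.write("disk-0", "stale-token", b"payload"))  # head node has been fenced off
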
  • Publication number: 20180181348
    Abstract: A data storage system includes multiple data storage units and a zonal control plane. The zonal control plane assigns volumes to respective ones of the data storage units. The data storage units include multiple head nodes and data storage sleds. At least one of the head nodes implements a local control plane for the data storage unit. Also, the head nodes of each data storage unit are configured to service read and write requests directed to one or more volumes serviced by the data storage unit independent of the zonal control plane.
    Type: Application
    Filed: December 28, 2016
    Publication date: June 28, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Norbert P. Kusters, Nachiappan Arumugam, Christopher Nathan Watson, Marc John Brooker, David R. Richardson, Danny Wei, John Luther Guthrie, II
  • Publication number: 20180181315
    Abstract: A data storage system includes multiple head nodes and multiple data storage sleds mounted in a rack. For a particular volume or volume partition one of the head nodes is designated as a primary head node for the volume or volume partition. The primary head node is configured to store data for the volume in a data storage of the primary head node and cause the data to be replicated to a secondary head node. The primary head node is also configured to cause the data for the volume to be stored in a plurality of respective mass storage devices each in different ones of the plurality of data storage sleds of the data storage system.
    Type: Application
    Filed: December 28, 2016
    Publication date: June 28, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Norbert P. Kusters, Nachiappan Arumugam, Christopher Nathan Watson, Marc John Brooker, David R. Richardson, Danny Wei, John Luther Guthrie, II
  • Publication number: 20180183868
    Abstract: A data storage system includes a rack, multiple head nodes, multiple data storage sleds, and at least two networking devices. The at least two networking devices are configured to implement at least two redundant networks within the data storage system. Also, each of the head nodes is assigned at least two network addresses for communication with the data storage sleds of the data storage system via the at least two networking devices. The data storage sleds each include multiple mass storage devices and a sled controller that is configured to couple with the at least two networking devices. In some embodiments, the data storage system further includes redundant power systems within a rack in which the head nodes, the data storage sleds, and the at least two networking devices are mounted.
    Type: Application
    Filed: December 28, 2016
    Publication date: June 28, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Norbert P. Kusters, Nachiappan Arumugam, Christopher Nathan Watson, Marc John Brooker, David R. Richardson, Danny Wei, John Luther Guthrie, II
  • Publication number: 20180184548
    Abstract: A data center may include a tape library rack module along with rack computer systems. The rack computer systems may be configured to provide computing capacity within a data center environment. In some embodiments, the tape library rack module may include an enclosure encompassing an interior of the tape library rack module, a rack within the interior, and a tape library unit mounted on the rack. The tape library unit may include tape cartridges configured to store data within a tape environment that is different than the data center environment. The tape library unit may be within a portion of the interior that is enclosed such that it is environmentally isolated from the data center environment. In some examples, the tape library rack module may include a cooling unit and/or a humidifier unit, which may provide the tape environment to the environmentally isolated portion of the interior of the tape library rack module.
    Type: Application
    Filed: February 2, 2018
    Publication date: June 28, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Darin Lee Frink, Kevin Bailey, Peter George Ross, Bryan James Donlan, James Caleb Kirschner, Mary Crys Calansingin, Paul David Franklin, Mastaka Kubo
  • Publication number: 20180173774
    Abstract: History for data objects may be maintained to detect data events. An indication of an Extract, Transform, Load (ETL) process applied to one or more source data objects to generate one or more transformed data objects may be received. History for the source data objects may be updated to include the transformed data objects and the ETL process that generated the transformed data objects. An evaluation of the update may be performed to determine whether an event associated with the data lineage is triggered. If the event is triggered, a notification of the event may be sent to one or more subscribers for the event.
    Type: Application
    Filed: December 20, 2016
    Publication date: June 21, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: George Steven McPherson, Mehul A. Shah, Prajakta Datta Damle, Gopinath Duddi, Anurag Windlass Gupta
  • Publication number: 20180174226
    Abstract: Processes for receiving, packaging and/or processing orders for items offered by an electronic marketplace are described, including methods whereby certain products of interest may be identified and subjected to alternative inbound and/or outbound processing that omit certain steps used in similar processes. A first inbound process may include steps related to shelving, or otherwise storing, received products, prior to packaging such products for mailing and delivery to consumers. In some examples, a second inbound process may omit one or more of the shelving/storing steps from the first inbound process, and pre-package the received product for mailing.
    Type: Application
    Filed: September 19, 2013
    Publication date: June 21, 2018
    Applicant: Amazon Technologies, Inc.
    Inventor: Jong Hwa Yoon
  • Publication number: 20180165876
    Abstract: A real-time video exploration (RVE) system that allows users to pause, step into, and explore 2D or 3D modeled worlds of scenes in a video. The system may leverage network-based computation resources to render and stream new video content from the models to clients with low latency. A user may pause a video, step into a scene, and interactively change viewing positions and angles in the model to move through or explore the scene. The user may resume playback of the recorded video when done exploring the scene. Thus, rather than just viewing a pre-rendered scene in a movie from a pre-determined perspective, a user may step into and explore the scene from different angles, and may wander around the scene at will within the scope of the model to discover parts of the scene that are not visible in the original video.
    Type: Application
    Filed: February 9, 2018
    Publication date: June 14, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Gerard Joseph Heinz, II, Michael Schleif Pesce, Collin Charles Davis, Michael Anthony Frazzini, Ashraf Alkarmi, Michael Martin George, David A. Limp, William Dugald Carr, JR.
  • Publication number: 20180165340
    Abstract: A distributed data warehouse system maintains data blocks on behalf of clients, and stores primary and secondary copies of data blocks on different disks or nodes in a cluster. The data warehouse system may back up data blocks in a key-value backup storage system. In response to a query targeting a data block previously stored in the cluster, the data warehouse system may determine whether a consistent, uncorrupted copy of the data block is available in the cluster (e.g., by applying a consistency check). If not (e.g., if a disk or node failed), the data warehouse system may automatically initiate an operation to restore the data block from the backup storage system, using a unique identifier of the data block to access a backup copy. The target data may be returned in a query response prior to restoring primary and secondary copies of the data block in the cluster.
    Type: Application
    Filed: February 9, 2018
    Publication date: June 14, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Deepak Agarwal, Anurag Windlass Gupta, Jakub Kulesza
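    The read path described above checks the cluster copies for a consistent, uncorrupted block and falls back to the key-value backup store, returning data before the cluster repair completes. In the sketch below, a checksum comparison stands in for the consistency check:

    # Hypothetical sketch of the query-time restore path in 20180165340.
    import hashlib

    def checksum(data):
        return hashlib.sha256(data).hexdigest()

    def read_block(block_id, cluster_copies, expected_checksums, backup_store):
        # Try primary and secondary copies in the cluster first.
        for copy in cluster_copies.get(block_id, []):
            if copy is not None and checksum(copy) == expected_checksums[block_id]:
                return copy, "served from cluster"
        # No consistent, uncorrupted copy: restore from key-value backup by unique id.
        restored = backup_store[block_id]
        cluster_copies[block_id] = [restored]       # repair can continue asynchronously
        return restored, "served from backup while cluster copies are restored"

    backup = {"blk-1": b"hello"}
    copies = {"blk-1": [b"hellX", None]}            # corrupted primary, failed secondary
    sums = {"blk-1": checksum(b"hello")}
    print(read_block("blk-1", copies, sums, backup))
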
  • Publication number: 20180165785
    Abstract: Methods, systems, and computer-readable media for capacity reservation for virtualized graphics processing are disclosed. A request is received to attach a virtual GPU to a virtual compute instance. The request comprises one or more constraints. Availability information is retrieved from a data store that indicates virtual GPUs available in a provider network and matching the one or more constraints. A virtual GPU is selected from among the available virtual GPUs in the availability information. The selected virtual GPU is reserved for attachment to the virtual compute instance. The virtual compute instance is implemented using CPU resources and memory resources of a physical compute instance, the virtual GPU is implemented using a physical GPU in the provider network, and the physical GPU is accessible to the physical compute instance over a network.
    Type: Application
    Filed: December 12, 2016
    Publication date: June 14, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Douglas Cotton Kurtz, Malcolm Featonby, Umesh Chandani, Adithya Bhat, Yuxuan Liu, Mihir Sadruddin Surani
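    Reservation here amounts to matching constraints against an availability table and marking the chosen virtual GPU as reserved. The constraint fields and table layout below are illustrative assumptions:

    # Hypothetical sketch of capacity reservation from 20180165785.
    AVAILABLE_GPUS = [
        {"id": "vgpu-1", "memory_gb": 8,  "region": "us-east-1", "reserved": False},
        {"id": "vgpu-2", "memory_gb": 16, "region": "us-east-1", "reserved": False},
    ]

    def reserve_virtual_gpu(constraints, availability=AVAILABLE_GPUS):
        for gpu in availability:
            if gpu["reserved"]:
                continue
            if all(gpu.get(key) == value for key, value in constraints.items()):
                gpu["reserved"] = True              # reserve for attachment to the instance
                return gpu["id"]
        return None

    print(reserve_virtual_gpu({"memory_gb": 16, "region": "us-east-1"}))  # vgpu-2
    print(reserve_virtual_gpu({"memory_gb": 16, "region": "us-east-1"}))  # None: capacity used
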
  • Publication number: 20180167661
    Abstract: A real-time video exploration (RVE) system that allows users to pause, step into, move through, and explore 2D or 3D modeled worlds of scenes in a video. The RVE system may allow users to discover, select, explore, and manipulate objects within the modeled worlds used to generate video content. The RVE system may implement methods that allow users to view and explore in more detail the features, components, and/or accessories of selected objects that are being manipulated and explored. The RVE system may also implement methods that allow users to interact with interfaces of selected objects or interfaces of components of selected objects.
    Type: Application
    Filed: February 9, 2018
    Publication date: June 14, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Gerard Joseph Heinz, II, Michael Schleif Pesce, Collin Charles Davis, Michael Anthony Frazzini, Ashraf Alkarmi, Michael Martin George, David A. Limp, William Dugald Carr, JR.
  • Publication number: 20180159717
    Abstract: Dynamic application instance discovery and state management within a distributed system are described. A distributed system may implement application instances configured to perform one or more application functions within the distributed system, and discovery and failure detection daemon (DFDD) instances, each configured to store an indication of a respective operational state of each member of a respective group of the application instances. Each of the DFDD instances may repeatedly execute a gossip-based synchronization protocol with another one of the DFDD instances, where execution of the protocol between DFDD instances includes reconciling differences among membership of the respective groups of application instances. A new application instance may be configured to notify a particular DFDD instance of its availability to perform an application function.
    Type: Application
    Filed: December 4, 2017
    Publication date: June 7, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: John David Cormie, Ami K. Fischman, Allan H. Vermeulen
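    A gossip round between two DFDD instances can be pictured as merging per-application-instance state, with the newer observation winning. The version-counter merge rule below is an assumption for illustration:

    # Hypothetical sketch of gossip-based reconciliation from 20180159717.
    class DFDDInstance:
        def __init__(self, name):
            self.name = name
            self.membership = {}    # app instance id -> (version, operational state)

        def observe(self, instance_id, version, state):
            current = self.membership.get(instance_id, (-1, None))
            if version > current[0]:
                self.membership[instance_id] = (version, state)

        def gossip_with(self, peer):
            # Reconcile differences in both directions: the newer version wins.
            for instance_id, (version, state) in list(peer.membership.items()):
                self.observe(instance_id, version, state)
            for instance_id, (version, state) in list(self.membership.items()):
                peer.observe(instance_id, version, state)

    a, b = DFDDInstance("dfdd-a"), DFDDInstance("dfdd-b")
    a.observe("app-1", 3, "OK")
    b.observe("app-1", 5, "FAILED")
    b.observe("app-2", 1, "NEW")        # a new application instance announced itself to dfdd-b
    a.gossip_with(b)
    print(a.membership)                 # both DFDD instances now agree on the latest states
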
  • Publication number: 20180150397
    Abstract: A separate distributed buffer cache system may be implemented for a storage client of a distributed storage system. Storage I/O requests may be sent from a storage client to one or more buffer cache nodes in a distributed buffer cache system that maintain portions of an in-memory buffer cache to which the requests pertain. The distributed buffer cache system may send the write requests on to the distributed storage system to be completed, and in response to receiving acknowledgements from the storage system, sending a completion acknowledgement back to the storage client. Buffer cache nodes may update buffer cache entries for received requests such that they are not available for reads until complete at the distributed storage system. For read requests where the buffer cache entries at the buffer cache node are invalid, valid data may be obtained from the distributed storage system and sent to the storage client.
    Type: Application
    Filed: January 26, 2018
    Publication date: May 31, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Anurag Windlass Gupta, Matthew David Allen
  • Publication number: 20180152503
    Abstract: A control-plane component of a virtual network interface (VNI) multiplexing service assigns one or more VNIs as members of a first interface group. A first VNI of the interface group is attached to a first compute instance. Network traffic directed to a particular endpoint address associated with the first interface group is to be distributed among members of the first interface group by client-side components of the service. The control-plane component propagates membership metadata of the first interface group to the client-side components. In response to a detection of an unhealthy state of the first compute instance, the first VNI is attached to a different compute instance by the control-plane component.
    Type: Application
    Filed: January 26, 2018
    Publication date: May 31, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Tobias Lars-Olov Holgers, Kevin Christopher Miller, Andrew Bruce Dickinson, David Carl Salyers, Xiao Zhang, Shane Ashley Hall, Christopher Ian Hendrie, Aniket Deepak Divecha, Ralph William Flora
  • Publication number: 20180150548
    Abstract: Recognizing unknown data objects may be implemented for data objects stored in a data store. Data objects that are identified as unknown may be accessed to retrieve a portion of the data object. Different representations of the data object may be generated for recognizing different data schemas. An analysis of the representations may be performed to identify a data schema for the unknown data object. The data schema may be stored in a metadata store for the unknown data object.
    Type: Application
    Filed: December 20, 2016
    Publication date: May 31, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Mehul A. Shah, George Steven McPherson, Prajakta Datta Damle, Gopinath Duddi, Anurag Windlass Gupta
  • Publication number: 20180150528
    Abstract: Data transformation workflows may be generated to transform data objects. A source data schema for a data object and a target data format or target data schema for a data object may be identified. A comparison of the source data schema and the target data format or schema may be made to determine what transformations can be performed to transform the data object into the target data format or schema. Code to execute the transformation operations may then be generated. The code may be stored for subsequent modification or execution.
    Type: Application
    Filed: December 20, 2016
    Publication date: May 31, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Mehul A. Shah, George Steven McPherson, Prajakta Datta Damle, Gopinath Duddi, Anurag Windlass Gupta, Benjamin Albert Sowell, Bohou Li
  • Publication number: 20180150529
    Abstract: Extract, Transform, Load (ETL) processing may be initiated by detected events. A trigger event may be associated with an ETL process that applies one or more transformations to a source data object. The trigger event may be detected for the ETL process and evaluated with respect to one or more execution conditions for the ETL process. If the execution conditions for the ETL process are satisfied, then the ETL process may be executed. At least some of the source data object may be obtained, the one or more transformations of the ETL process may be applied, and one or more transformed data objects may be stored.
    Type: Application
    Filed: December 20, 2016
    Publication date: May 31, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: George Steven McPherson, Mehul A. Shah, Prajakta Datta Damle, Gopinath Duddi, Anurag Windlass Gupta
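    Event-triggered ETL as described above reduces to detecting a trigger, testing its execution conditions, and only then applying the transformations. A minimal sketch with invented job and event structures:

    # Hypothetical sketch of event-triggered ETL from 20180150529.
    def run_etl_if_triggered(event, etl_jobs):
        """etl_jobs maps a trigger event name to (condition, transform) pairs."""
        for condition, transform in etl_jobs.get(event["name"], []):
            if not condition(event):
                continue                            # execution conditions not satisfied
            source = event["source_object"]
            transformed = transform(source)
            print(f"stored transformed object derived from {source['key']}")
            return transformed
        return None

    jobs = {
        "new_object_uploaded": [
            (lambda e: e["source_object"]["size"] > 0,          # execution condition
             lambda src: {"key": src["key"] + ".parquet",        # transformation
                          "rows": src["size"] // 100}),
        ]
    }
    event = {"name": "new_object_uploaded",
             "source_object": {"key": "logs/2016-12-20.csv", "size": 5000}}
    print(run_etl_if_triggered(event, jobs))
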
  • Publication number: 20180152501
    Abstract: Methods, apparatus, and computer-accessible storage media for remotely managing a gateway that serves as an interface between processes on a customer network and a service provider, for example to store data to a remote data store. The gateway sends a connection request to a gateway control server. The server holds the connection until the server receives information (e.g., information from the customer sent via the service provider) for the gateway. The server sends the information as requests via the gateway-initiated connection, and continues to hold the connection. If a server receives information for a gateway to which it does not hold a connection, the server sends the information to the server that does hold the connection. The server may either discover the appropriate server via a registration service that registers connections to gateways or broadcast the information to peer servers identified through a registration service.
    Type: Application
    Filed: January 8, 2018
    Publication date: May 31, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: James Christopher Sorenson, III, Yun Lin, David Carl Salyers, Ankur Khetrapal, Nishanth Alapati
  • Publication number: 20180152502
    Abstract: In certain embodiments, a computer-implemented method includes accessing, using one or more processing units, application parameters associated with an application. The application parameters define constraints for hosting the application using one or more of a plurality of provisioned computing environments available over a computer network from multiple computing resources vendors. Each vendor is associated with a corresponding vendor-specific provisioned computing environment that includes computing resources available to be provisioned for use by multiple entities distinct from the vendors. The method includes accessing, using the one or more processing units, vendor-specific data for the vendor-specific provisioned computing environments.
    Type: Application
    Filed: January 25, 2018
    Publication date: May 31, 2018
    Applicant: Amazon Technologies, Inc.
    Inventor: Christopher Paul Kirby
  • Publication number: 20180143967
    Abstract: A natural language understanding model is trained using respective natural language example inputs corresponding to a plurality of applications. A determination is made as to whether a value of a first parameter of a first application is to be obtained using a natural language interaction. Using the natural language understanding model, at least a portion of the first application is generated.
    Type: Application
    Filed: November 23, 2016
    Publication date: May 24, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Vikram Sathyanarayana Anbazhagan, Rama Krishna Sandeep Pokkunuri, Swaminathan Sivasubramanian, Stefano Stefani, Vladimir Zhukov
  • Publication number: 20180145879
    Abstract: A virtual network verification service for provider networks that leverages a declarative logic programming language to allow clients to pose queries about their virtual networks as constraint problems; the queries may be resolved using a constraint solver engine. Semantics and logic for networking primitives of virtual networks in the provider network environment may be encoded as a set of rules according to the logic programming language; networking security standards and/or client-defined rules may also be encoded in the rules. A description of a virtual network may be obtained and encoded. A constraint problem expressed by a query may then be resolved for the encoded description according to the encoded rules using the constraint solver engine; the results may be provided to the client.
    Type: Application
    Filed: November 22, 2016
    Publication date: May 24, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: John Cook, Catherine Dodge, Sean McLaughlin
  • Publication number: 20180143852
    Abstract: A scheduler of a batch job management service determines that a set of resources allocated to a client is insufficient to execute one or more jobs. The scheduler prepares a multi-dimensional statistical representation of resource requirements of the jobs, and transmits it to a resource controller. The resource controller uses the multi-dimensional representation and resource usage state information to make resource allocation change decisions.
    Type: Application
    Filed: November 23, 2016
    Publication date: May 24, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Dougal Stuart Ballantyne, James Edward Kinney, Jr., Aswin Damodar, Chetan Hosmani, Rejith George Joseph, Chris William Ramsey, Kiuk Chung, Jason Roy Rupard
  • Publication number: 20180143857
    Abstract: A determination is made as to whether a value of a first parameter of a first application is to be obtained using a natural language interaction. Based on received input, a first service of a plurality of services is identified. The first service is to be used to perform a first task associated with the first parameter. Portions of the first application to determine the value of the first parameter and to invoke the first service are generated.
    Type: Application
    Filed: November 23, 2016
    Publication date: May 24, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Vikram Sathyanarayana Anbazhagan, Swaminathan Sivasubramanian, Stefano Stefani, Vladimir Zhukov
  • Publication number: 20180144004
    Abstract: Methods, systems, and computer-readable media for global column indexing in a graph database are disclosed. A plurality of data elements of a graph database are stored as triples. The triples comprise identifiers, column names, and values. The column names are globally scoped in the graph database and are associated with data types. Indices corresponding to the column names are created. A particular one of the indices comprises one or more of the values associated with the corresponding column name. A query is performed on the graph database using one or more of the indices corresponding to one or more of the column names associated with the query.
    Type: Application
    Filed: November 23, 2016
    Publication date: May 24, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Kawarjit Bedi, Piyush Gupta, Xingbo Wang, Sainath Chowdary Mallidi, Andi Gutmans
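    Globally scoped column names with per-column indices can be pictured as value-to-identifier maps maintained alongside the stored triples, so a query consults an index instead of scanning. The sketch below is illustrative only:

    # Hypothetical sketch of global column indexing from 20180144004.
    from collections import defaultdict

    class GraphStore:
        def __init__(self):
            self.triples = []                                      # (identifier, column, value)
            self.indices = defaultdict(lambda: defaultdict(set))   # column -> value -> ids

        def add(self, identifier, column, value):
            self.triples.append((identifier, column, value))
            self.indices[column][value].add(identifier)            # maintain per-column index

        def query(self, column, value):
            # Answer the query from the index rather than scanning all triples.
            return sorted(self.indices[column][value])

    g = GraphStore()
    g.add("n1", "city", "Seattle")
    g.add("n2", "city", "Seattle")
    g.add("n2", "age", 30)
    print(g.query("city", "Seattle"))   # ['n1', 'n2']
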
  • Publication number: 20180139110
    Abstract: Methods and apparatus are disclosed for programming reconfigurable logic devices such as FPGAs in a networked server environment. In one example, a system hosting a network service providing field programmable gate array (FPGA) services includes a network service provider configured to receive a request to implement application logic in a plurality of FPGAs, allocate a computing instance comprising the FPGAs in response to receiving the request, produce configuration information for programming the FPGAs, and send the configuration information to an allocated computing instance. The system further includes a computing host that is allocated by the network service provider as a computing instance which includes memory, processors configured to execute computer-executable instructions stored in the memory, and the programmed FPGAs.
    Type: Application
    Filed: November 17, 2016
    Publication date: May 17, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Robert Michael Johnson, Nafea Bshara, Matthew Shawn Wilson
  • Publication number: 20180138740
    Abstract: A system for performing computing operations in a data center includes one or more sets of computer systems, one or more primary power systems, and a reserve power system. The primary power systems include a downstream portion that supplies power to at least one of the sets of computer systems. The reserve power system includes switches that switch between supplying a primary power feed and a reserve power feed from the reserve power system through part of the primary power system. An input resiliency switch can switch between supplying primary power and reserve power to support power supplied to the sets of computer systems through the primary power system based upon a primary power feed fault. A power distribution switch can switch between supplying primary power and reserve power to part of the downstream portion of the primary power system to bypass an upstream portion of the primary power system.
    Type: Application
    Filed: January 15, 2018
    Publication date: May 17, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Paul Andrew Churnock, Huyen Van Nguyen, Kelsey Michelle Wildstone, Patrick Hughes, Nigel McGee
  • Publication number: 20180137121
    Abstract: The present disclosure relates to content identification based on dynamic formation of group profiles. The group profile can be associated with some organization criteria, such as physical location, temporal attributes, etc., and can be configured with various preferences and restrictions. Subsequent to the initialization of the group profile(s), individual users can be associated with the group. Based on the addition (or subtraction) of users, the established group profiles are updated or modified according to one or more attributes of the individual profiles of the users associated with the group. Content of interest to the group can be identified using the group profile information. The identified content can also be provided to the individual users associated with the group. The group profile can be updated based on group membership change or feedback to identified content, and additional or alternative content recommendations can be provided.
    Type: Application
    Filed: January 3, 2014
    Publication date: May 17, 2018
    Applicant: Amazon Technologies, Inc.
    Inventor: Tarun Agarwal
  • Publication number: 20180129992
    Abstract: A labor planning application determines a labor plan including an allocation of workers to multiple processing stages during a shift based on estimated work for processing by the respective processing stages during the shift. The labor planning application determines a quantity of workers scheduled for each shift based on a shift schedule. The labor planning application determines the estimated work for each processing stage based on a schedule of deliveries for the location.
    Type: Application
    Filed: December 13, 2013
    Publication date: May 10, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Julien Samuel Lord, Michael Lusthaus, Borys Derevyanchenko, Kashif Usmani
  • Publication number: 20180121254
    Abstract: A stream management system may implement dynamic management of a data stream. Utilization data of different partitions of a data stream may be tracked. When routing a data record received at the stream management system, a partition may be dynamically identified for the data record. The data record may then be directed to the identified partition. Other management operations, such as repartitioning the data stream or reassigning resources for processing data records in the data stream, may be performed based on the utilization data tracked for the partitions.
    Type: Application
    Filed: December 29, 2017
    Publication date: May 3, 2018
    Applicant: Amazon Technologies, Inc.
    Inventor: Gaurav D. Ghare
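    Dynamic routing as described above picks a partition using tracked utilization rather than a fixed mapping. In the sketch below, utilization is simplified to a per-partition byte counter and records go to the least-utilized partition; both simplifications are assumptions:

    # Hypothetical sketch of utilization-aware routing from 20180121254.
    class StreamRouter:
        def __init__(self, partitions):
            self.utilization = {p: 0 for p in partitions}   # tracked per-partition load

        def route(self, record):
            # Dynamically identify the least-utilized partition for this record.
            partition = min(self.utilization, key=self.utilization.get)
            self.utilization[partition] += len(record)
            return partition

    router = StreamRouter(["shard-1", "shard-2"])
    for rec in (b"a" * 100, b"b" * 10, b"c" * 10):
        print(router.route(rec), router.utilization)
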
  • Publication number: 20180114242
    Abstract: The systems and processes discussed herein may identify, prioritize, and recommend new deals to consumers based at least in part on triggering events. A consumer may interact with a previously acquired deal, such as by redeeming the deal, requesting a refund for the deal, etc. Such user interaction may be determined to be a triggering event. Based on a type of the triggering event, the systems described herein may identify one or more new deals having characteristics similar to the previously acquired deal. The one or more new deals may be recommended to a user at the same time or at some time after the triggering event occurs via a website, an e-mail message, an application associated with a user device, a text message, or any other manner that may be used to communicate the new deals to the user.
    Type: Application
    Filed: February 27, 2014
    Publication date: April 26, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Gustavo Eduardo Lopez, Siddharth Arora, Maciej Golonka
  • Publication number: 20180107704
    Abstract: A data store maintaining data may implement reducing input/output (I/O) operations for on-demand data page generation. Log records may be maintained for data pages of data describing changes to the data pages. A coalesce operation may be performed when log records for a data page exceed a coalesce threshold for the data page, applying the log records for the data page to a version of the data page and creating a new version that includes the changes indicated by the log records. An indication may be received to increase the coalesce threshold for a particular data page, delaying a coalesce operation for the data page according to the increased coalesce threshold. The indication may be received from a storage engine that identifies a delay for the particular data page.
    Type: Application
    Filed: December 18, 2017
    Publication date: April 19, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Pradeep Jnana Madhavarapu, Yan Valerie Leshinsky
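    The coalesce decision described above compares the number of pending log records for a page against a per-page threshold that can be raised to delay the associated I/O. A minimal sketch with invented structures:

    # Hypothetical sketch of delayed coalesce from 20180107704.
    class PageStore:
        def __init__(self, default_threshold=5):
            self.pages = {}              # page_id -> materialized page content (list of changes)
            self.logs = {}               # page_id -> pending log records
            self.thresholds = {}         # page_id -> coalesce threshold
            self.default_threshold = default_threshold

        def append_log(self, page_id, change):
            self.logs.setdefault(page_id, []).append(change)
            threshold = self.thresholds.get(page_id, self.default_threshold)
            if len(self.logs[page_id]) > threshold:
                self.coalesce(page_id)

        def coalesce(self, page_id):
            # Apply pending log records to produce a new version of the page.
            self.pages.setdefault(page_id, []).extend(self.logs.pop(page_id, []))

        def raise_threshold(self, page_id, new_threshold):
            # The storage engine asked to delay coalescing for this page.
            self.thresholds[page_id] = new_threshold

    store = PageStore()
    store.raise_threshold("page-7", 100)    # delay I/O for a hot page
    for i in range(10):
        store.append_log("page-7", f"change-{i}")
    print(len(store.logs["page-7"]), "log records pending; coalesce delayed")
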
  • Publication number: 20180109610
    Abstract: A service provider may apply customer-selected or customer-defined auto-scaling policies to a cluster of resources (e.g., virtualized computing resource instances or storage resource instances in a MapReduce cluster). Different policies may be applied to different subsets of cluster resources (e.g., different instance groups containing nodes of different types or having different roles). Each policy may define an expression to be evaluated during execution of a distributed application, a scaling action to take if the expression evaluates true, and an amount by which capacity should be increased or decreased. The expression may be dependent on metrics emitted by the application, cluster, or resource instances by default, metrics defined by the client and emitted by the application, or metrics created through aggregation. Metric collection, aggregation and rules evaluation may be performed by a separate service or by cluster components. An API may support auto-scaling policy definition.
    Type: Application
    Filed: December 18, 2017
    Publication date: April 19, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Jonathan Daly Einkauf, Luca Natali, Bhargava Ram Kalathuru, Saurabh Dileep Baji, Abhishek Rajnikant Sinha
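    An auto-scaling policy as described above pairs an expression over emitted metrics with a scaling action and an adjustment amount. The policy fields and the metric name in the sketch below are illustrative assumptions:

    # Hypothetical sketch of policy-driven auto-scaling from 20180109610.
    def evaluate_policies(policies, metrics, current_capacity):
        """Each policy: an expression over metrics, a scaling action, and an amount."""
        capacity = current_capacity
        for policy in policies:
            if policy["expression"](metrics):       # expression evaluated against emitted metrics
                if policy["action"] == "scale_out":
                    capacity += policy["amount"]
                elif policy["action"] == "scale_in":
                    capacity = max(policy.get("min", 0), capacity - policy["amount"])
        return capacity

    policies = [
        {"expression": lambda m: m["memory_available_pct"] < 15,
         "action": "scale_out", "amount": 2},
        {"expression": lambda m: m["memory_available_pct"] > 75,
         "action": "scale_in", "amount": 1, "min": 2},
    ]
    print(evaluate_policies(policies, {"memory_available_pct": 10}, current_capacity=4))  # 6
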
  • Publication number: 20180101494
    Abstract: A system that provides virtualized computing resources to clients or subscribers may include an enhanced PCIe endpoint device on which an emulation processor emulates PCIe compliant hardware devices in software. In response to receiving a transaction layer packet that includes a transaction directed to an emulated device, the endpoint device may process the transaction, which may include emulating the target emulated device. The endpoint device may include multiple PCIe controllers and may expose multiple PCIe endpoints to a host computing system. For example, each PCIe controller may be physically coupled to one of multiple host processor sockets or host server SOCs on the host computing system, each of which exposes its own root complex. Traffic received by the PCIe controllers may be merged on the endpoint device for subsequent processing. Traffic originating at one host processor socket may be steered to the PCIe controller to which it is directly attached.
    Type: Application
    Filed: December 11, 2017
    Publication date: April 12, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Mark Bradley Davis, Anthony Nicholas Liguori
  • Publication number: 20180094966
    Abstract: A variety of types of sensors may be placed in or on an interior or exterior surface of a vehicle. The sensors may capture various kinds of data such as time-of-flight data, weight data, image data, and so forth. The sensor data may be transmitted via one or more network connections to a device configured to process the sensor data to generate load characteristic information. The load characteristic information may be indicative of one or more characteristics of a vehicle load of the vehicle such as a space utilization characteristic.
    Type: Application
    Filed: March 4, 2014
    Publication date: April 5, 2018
    Applicant: Amazon Technologies, Inc.
    Inventor: John Nicholas Buether
  • Publication number: 20180095774
    Abstract: In a multi-tenant environment, separate virtual machines can be used for configuring and operating different subsets of programmable integrated circuits, such as a Field Programmable Gate Array (FPGA). The programmable integrated circuits can communicate directly with each other within a subset, but cannot communicate between subsets. Generally, all of the subsets of programmable ICs are within a same host server computer within the multi-tenant environment, and are sandboxed or otherwise isolated from each other so that multiple customers can share the resources of the host server computer without knowledge or interference with other customers.
    Type: Application
    Filed: September 30, 2016
    Publication date: April 5, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Islam Mohamed Hatem Abdulfattah Mohamed Atta, Mark Bradley Davis, Robert Michael Johnson, Christopher Joseph Pettey, Asif Khan, Nafea Bshara
  • Publication number: 20180095670
    Abstract: Methods and apparatus are disclosed for securely erasing partitions of reconfigurable logic devices such as FPGAs in a multi-tenant server environment. In one example, a method of securely erasing an FPGA includes identifying one partition of previously-programmed resources in the FPGA, erasing the identified partition by storing new values in memory or storage elements of the identified partition, and storing new values in memory or storage elements of additional external resources electrically connected to the integrated circuit and associated with the identified partition. Thus, other partitions and subsequent users of the identified partition are prevented from accessing the securely erased data. A configuration circuit, accessible by a host computer via DMA, can be programmed into the FPGA reconfigurable logic for performing the disclosed erasing operations.
    Type: Application
    Filed: September 30, 2016
    Publication date: April 5, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Mark Bradley Davis, Erez Izenberg, Robert Michael Johnson, Asif Khan, Islam Mohamed Hatem Abdulfattah Mohamed Atta, Nafea Bshara, Christopher Joseph Pettey
  • Publication number: 20180088992
    Abstract: A multi-tenant environment is described with configurable hardware logic (e.g., a Field Programmable Gate Array (FPGA)) positioned on a host server computer. For communicating with the configurable hardware logic, an intermediate host integrated circuit (IC) is positioned between the configurable hardware logic and virtual machines executing on the host server computer. The host IC can include management functionality and mapping functionality to map requests between the configurable hardware logic and the virtual machines. Shared peripherals can be located either on the host IC or the configurable hardware logic. The host IC can apportion resources amongst the different configurable hardware logics to ensure that no one customer can over consume resources.
    Type: Application
    Filed: September 28, 2016
    Publication date: March 29, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Mark Bradley Davis, Asif Khan, Christopher Joseph Pettey, Erez Izenberg, Nafea Bshara
  • Publication number: 20180089119
    Abstract: The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a plurality of reconfigurable logic regions. Each reconfigurable region can include hardware that is configurable to implement an application logic design. The host logic can be used for separately encapsulating each of the reconfigurable logic regions. The host logic can include a plurality of data path functions where each data path function can include a layer for formatting data transfers between a host interface and the application logic of a corresponding reconfigurable logic region. The host interface can be configured to apportion bandwidth of the data transfers generated by the application logic of the respective reconfigurable logic regions.
    Type: Application
    Filed: September 29, 2016
    Publication date: March 29, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Asif Khan, Islam Mohamed Hatem Abdulfattah Mohamed Atta, Robert Michael Johnson, Mark Bradley Davis, Christopher Joseph Pettey, Nafea Bshara, Erez Izenberg
  • Publication number: 20180089249
    Abstract: Distributed system resources may be managed by applying user created policies to the resources. To ensure that valid policies are applied, remote validation for the policies may be implemented. A validation event for a policy may be detected. A remote validation agent may be identified for the policy and a validation request sent to the remote validation agent that includes information for validating the policy. The remote validation agent may return a validation result for the policy. If valid, a policy action that triggered the remote validation event for the policy may be allowed. If invalid, the policy action that triggered the remote validation event for the policy may be denied.
    Type: Application
    Filed: September 23, 2016
    Publication date: March 29, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Brian Collins, Zachary Mohamed Shalla, Marvin Michael Theimer, John Petry, Michael Hart, Serge Hairanian, Anders Samuelsson, Salvador Salazar Sepulveda, Ji Luo