Patents by Inventor Rickard Sjoegren
Rickard Sjoegren has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230357698
Abstract: A computer-implemented method for detecting foam on the surface of a liquid medium contained in a vessel is described. The method includes receiving a sample image of at least a portion of the vessel comprising the liquid-gas interface, and classifying the sample image between a first class and at least one second class associated with different amounts of foam on the surface of the liquid. The classifying is performed by a deep neural network classifier trained on a plurality of training images of at least a portion of a vessel comprising a liquid-gas interface. The training images may include images that differ from each other in one or more of: the location of the liquid-gas interface in the image, the polar and/or azimuthal angle at which the liquid-gas interface is viewed, and the light intensity or colour temperature of the light sources that illuminated the imaged portion of the vessel when the image was acquired.
Type: Application
Filed: August 13, 2021
Publication date: November 9, 2023
Inventors: Jonas Austerjost, Jens Matuszczyk, Robert Soeldner, Rickard Sjoegren, Christoffer Edlund, David James Pollard
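The image variation described above (interface location, viewing angle, illumination) is partly physical, but two of the listed factors can be emulated in software when augmenting training data. The sketch below is illustrative only and is not taken from the patent; the function name, shift range, and gain range are assumptions:

```python
import numpy as np

def augment_vessel_image(image, rng):
    """Produce a training variant of a vessel image.

    Varies the vertical location of the liquid-gas interface (via a
    roll along the image's vertical axis) and the illumination
    intensity (via a multiplicative gain). Viewing-angle changes would
    require the physical imaging rig and are omitted here.
    """
    shift = int(rng.integers(-image.shape[0] // 8, image.shape[0] // 8 + 1))
    gain = float(rng.uniform(0.7, 1.3))
    shifted = np.roll(image, shift, axis=0)
    # Keep intensities in the valid [0, 1] range after the gain
    return np.clip(shifted * gain, 0.0, 1.0)
```

Augmentations of this kind are applied on the fly during training so the classifier does not overfit to one interface position or lighting setup.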
-
Publication number: 20230360200
Abstract: A method for monitoring one or more live cells includes capturing a non-fluorescence image of a sample that includes one or more live cells that further contain fluorescent protein-based nuclear translocation reporters (FTRs), capturing a fluorescence image of the FTRs in the live cell(s) in the sample, identifying, via a computational model, nuclear pixels of the non-fluorescence image that correspond to nuclei of the live cell(s), identifying, based on the nuclear pixels, first pixels of the fluorescence image that correspond to the nuclei and second pixels of the fluorescence image that do not correspond to the nuclei, and calculating, based on first intensities of the first pixels and second intensities of the second pixels, a metric representing a first amount of the FTRs located within the nuclei of the live cell(s) and a second amount of the FTRs not located within the nuclei of the live cell(s).
Type: Application
Filed: May 9, 2022
Publication date: November 9, 2023
Inventors: Elsa Sörman Paulsson, Christoffer Edlund, Grigory Filonov, Cicely Schramm, Rickard Sjögren
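The final step, turning the first- and second-pixel intensities into a translocation metric, can be sketched directly. This is an illustrative implementation rather than the patent's, and it assumes the computational model's nuclear segmentation is already available as a boolean mask:

```python
import numpy as np

def translocation_metric(fluorescence, nuclear_mask):
    """Summarize how much reporter signal sits inside the nuclei.

    fluorescence: 2D array of fluorescence intensities.
    nuclear_mask: boolean 2D array, True where a pixel belongs to a nucleus.
    Returns (nuclear_amount, non_nuclear_amount, nuclear_fraction).
    """
    nuclear = float(fluorescence[nuclear_mask].sum())
    cytoplasmic = float(fluorescence[~nuclear_mask].sum())
    total = nuclear + cytoplasmic
    # Fraction of total reporter signal located within the nuclei
    fraction = nuclear / total if total > 0 else 0.0
    return nuclear, cytoplasmic, fraction
```

Tracking this fraction over time gives a readout of nuclear translocation without destroying the live cells.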
-
Patent number: 11803963
Abstract: A method of analyzing images of a biological specimen using a computational model is described, the method including processing a cell image of the biological specimen and a phase contrast image of the biological specimen using the computational model to generate output data. The cell image is a composite of a first brightfield image of the biological specimen at a first focal plane and a second brightfield image of the biological specimen at a second focal plane. The method also includes performing a comparison of the output data and reference data, and refining the computational model based on that comparison. The method also includes thereafter processing additional image pairs according to the computational model to further refine it based on comparisons of additional output data generated by the model to additional reference data.
Type: Grant
Filed: November 17, 2020
Date of Patent: October 31, 2023
Assignee: Sartorius BioAnalytical Instruments, Inc.
Inventors: Timothy Jackson, Nevine Holtz, Christoffer Edlund, Rickard Sjögren
-
Publication number: 20230260083
Abstract: A computer-implemented method is provided for processing images. The method can include down-sampling a plurality of first images having a first resolution for obtaining a plurality of second images having a second resolution, and training an artificial neural network model to process an input image and output an output image having a higher resolution than the input image.
Type: Application
Filed: July 8, 2021
Publication date: August 17, 2023
Inventors: Christoffer Edlund, Rickard Sjögren, Timothy Dale, Gillian Lovell
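The down-sampling step produces (low-resolution, high-resolution) training pairs for the super-resolution network. A minimal sketch, assuming block-average down-sampling (the abstract leaves the down-sampling operation open):

```python
import numpy as np

def downsample(image, factor=2):
    """Block-average a high-resolution image by an integer factor."""
    h, w = image.shape
    # Trim so both dimensions divide evenly by the factor
    h2, w2 = h - h % factor, w - w % factor
    trimmed = image[:h2, :w2]
    return trimmed.reshape(h2 // factor, factor,
                           w2 // factor, factor).mean(axis=(1, 3))

def make_training_pairs(high_res_images, factor=2):
    """Pair each down-sampled image (network input) with its original
    high-resolution image (network target)."""
    return [(downsample(im, factor), im) for im in high_res_images]
```

Because the targets are the originals themselves, no separately acquired high-resolution ground truth is needed.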
-
Publication number: 20230215195
Abstract: A computer-implemented method is provided for analyzing videos of a living system captured with microscopic imaging. The method can include obtaining a base dataset including one or more videos captured with microscopic imaging, with at least one of the one or more videos including a cellular event, and cropping out, from the base dataset, sub-videos including one or more objects of interest that may be involved in the cellular event. An artificial neural network (ANN) model can be trained using the plurality of selected sub-videos as training data to perform unsupervised video alignment, a query sub-video can be aligned using the trained ANN model, and a determination can be made whether or not the query sub-video includes the cellular event.
Type: Application
Filed: May 19, 2021
Publication date: July 6, 2023
Applicant: Sartorius Stedim Data Analytics AB
Inventors: Rickard Sjögren, Christoffer Edlund, Mattias Sehlstedt
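The cropping step extracts a fixed-size window around each object of interest in every frame, turning a full-field video into an object-centric sub-video. A minimal sketch, assuming the object's center coordinates are already known (e.g. from a tracker; the function name and fixed center are illustrative assumptions):

```python
import numpy as np

def crop_sub_video(video, center_yx, size):
    """Crop a size x size window around a tracked object in every frame.

    video: array of shape (T, H, W).
    center_yx: (row, column) center of the object of interest.
    Returns an array of shape (T, size, size).
    """
    y, x = center_yx
    half = size // 2
    # Same spatial window in every frame, so the object stays centered
    return video[:, y - half:y + half, x - half:x + half]
```

The resulting sub-videos are what the ANN sees during unsupervised alignment training.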
-
Publication number: 20230077294
Abstract: Methods for monitoring, controlling and simulating a bioprocess comprising a cell culture in a bioreactor are provided. The methods comprise obtaining values of one or more process conditions for the bioprocess at one or more maturities, and determining the specific transport rates of one or more metabolites in the cell culture using the values obtained as input to a machine learning model trained to predict the specific transport rates of the one or more metabolites at a latest maturity of the one or more maturities, or a later maturity, based at least in part on the values of one or more process conditions for the bioprocess at the one or more preceding maturities. The methods further comprise predicting one or more features of the bioprocess based at least in part on the determined specific transport rates. Systems, computer readable media and methods for providing tools to implement such methods are also provided.
Type: Application
Filed: January 14, 2021
Publication date: March 9, 2023
Applicant: Sartorius Stedim Data Analytics AB
Inventors: Christopher Peter McCready, Brandon Corbett, Rickard Sjögren, Frida Nordström
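Once a specific transport rate is predicted, downstream features of the bioprocess follow from a mass balance. A minimal forward-Euler sketch, not from the patent, where `transport_rate_fn` stands in for the trained machine learning model and `biomass` for the measured or modeled cell density:

```python
def simulate_metabolite(c0, biomass, transport_rate_fn, dt, steps):
    """Integrate a metabolite mass balance dc/dt = q(t) * X(t).

    c0: initial metabolite concentration.
    biomass: function t -> cell density X(t).
    transport_rate_fn: function t -> specific transport rate q(t)
        (stand-in for the trained ML model's prediction).
    Returns the concentration trajectory as a list of length steps + 1.
    """
    c = c0
    trajectory = [c]
    for k in range(steps):
        t = k * dt
        # Specific rate is per unit biomass, so multiply by X(t)
        c = c + dt * transport_rate_fn(t) * biomass(t)
        trajectory.append(c)
    return trajectory
```

This hybrid structure (ML for the rates, mechanistic mass balance for the state) is what lets the same model serve monitoring, control, and simulation.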
-
Publication number: 20220318668
Abstract: A method is described for training a machine learning model to predict virus titer from an image or a sequence of images of a cell culture containing a virus population. The trained machine learning model allows a prediction of virus titer to be made much earlier than in the standard virus plaque assay, for example 6 or 8 hours after initial inoculation of the cell culture with the virus sample. The method includes the steps of:
(1) obtaining a training set in the form of a plurality of sets of images of virus-treated cell cultures from a plurality of experiments at one or more time points from a start time t0 to a final time tfinal,
(2) for each experiment, recording at least one numeric virus titer readout of the virus-treated cell culture at the final time tfinal,
(3) processing all the images in the training set to acquire a numeric representation of each image, and
(4) training one or more machine learning models to make a prediction of a final virus titer on the training set numeric representations.
Type: Application
Filed: March 31, 2021
Publication date: October 6, 2022
Inventors: Michael W. OLSZOWY, Oscar-Werner REIF, Johan TRYGG, Richard WALES, Rickard SJÖGREN, Christoffer EDLUND
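Steps (3) and (4) above can be sketched end to end. The patent leaves the numeric representation open, so simple intensity statistics stand in for it here, and a least-squares regression stands in for the machine learning model; all names and features below are illustrative assumptions:

```python
import numpy as np

def featurize(image):
    """Stand-in numeric representation of one image: a few intensity
    statistics in place of a learned embedding."""
    return np.array([image.mean(), image.std(), (image > 0.5).mean()])

def train_titer_model(image_sets, titers):
    """Fit a least-squares map from early-timepoint image features to
    the final plaque-assay titer recorded for each experiment."""
    X = np.stack([np.concatenate([featurize(im) for im in ims])
                  for ims in image_sets])
    X = np.hstack([X, np.ones((len(X), 1))])  # intercept column
    coef, *_ = np.linalg.lstsq(X, np.asarray(titers, dtype=float),
                               rcond=None)
    return coef

def predict_titer(coef, images):
    """Predict the final titer for a new experiment's early images."""
    x = np.concatenate([featurize(im) for im in images])
    return float(np.append(x, 1.0) @ coef)
```

With a trained model, the titer estimate is available as soon as the early images are, rather than days later when plaques become countable.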
-
Publication number: 20220309089
Abstract: A computer-implemented method is provided. The method may comprise: obtaining at least one document to be classified; classifying, using a machine learning model including an artificial neural network (ANN) and an attention mechanism, the at least one document into at least two classes; determining, for each of the at least one document, a confidence value of the classifying, based on one or more outputs of one or more nodes comprised in the ANN; assigning, to each of the at least one document, based at least in part on the confidence value, one of at least two categories that are associated with different degrees of credibility of the classifying; and providing for display one or more of the at least one document with the assigned category and attention information that indicates the significance of one or more parts of each displayed document in the classifying of said document.
Type: Application
Filed: March 25, 2021
Publication date: September 29, 2022
Inventors: Alexander SUTHERLAND, Rickard SJÖGREN, Astrid STEHL
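The assignment step, mapping a confidence value to a credibility category, reduces to simple thresholding. The thresholds and category names below are illustrative assumptions; the abstract only requires that the categories reflect different degrees of credibility:

```python
def assign_credibility(confidence, review_threshold=0.6,
                       trust_threshold=0.9):
    """Map a classifier confidence in [0, 1] to a credibility category.

    Thresholds are illustrative; in practice they would be calibrated
    against the classifier's observed error rates.
    """
    if confidence >= trust_threshold:
        return "credible"
    if confidence >= review_threshold:
        return "needs review"
    return "not credible"
```

Displaying the category alongside the attention information tells a reviewer both how much to trust the label and which passages drove it.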
-
Patent number: 11422355
Abstract: A method is disclosed for acquiring a single, in-focus two-dimensional projection image of a live, three-dimensional cell culture sample with a fluorescence microscope. One or more long-exposure "Z-sweep" images are obtained, i.e. via a single or series of continuous acquisitions while moving the Z-focal plane of a camera through the sample, to produce one or more two-dimensional images of fluorescence intensity integrated over the Z-dimension. The acquisition method is much faster than a Z-stack method, which enables higher throughput and reduces the risk of exposing the sample to too much fluorescent light. The long-exposure Z-sweep image(s) is then input into a neural network that has been trained to produce a high-quality (in-focus) two-dimensional projection image of the sample.
Type: Grant
Filed: July 22, 2020
Date of Patent: August 23, 2022
Assignee: Sartorius BioAnalytical Instruments, Inc.
Inventors: Timothy Jackson, Rickard Sjögren, Christoffer Edlund, Edvin Forsgren
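On a discretely sampled volume, the long-exposure Z-sweep's integral over the Z-dimension becomes a sum over the Z axis, which is useful for simulating sweep images from existing Z-stacks when building training data. An illustrative sketch (not the patent's acquisition hardware):

```python
import numpy as np

def z_sweep_projection(z_stack):
    """Emulate a long-exposure Z-sweep from a discrete Z-stack.

    A continuous exposure while the focal plane moves through the
    sample integrates fluorescence over Z; for a simulated stack of
    shape (Z, H, W) that integral becomes a sum over the Z axis.
    """
    return np.asarray(z_stack, dtype=float).sum(axis=0)
```

The blurry summed image is what the trained network then deblurs into an in-focus projection.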
-
Publication number: 20220026699
Abstract: A method is disclosed for acquiring a single, in-focus two-dimensional projection image of a live, three-dimensional cell culture sample with a fluorescence microscope. One or more long-exposure "Z-sweep" images are obtained, i.e. via a single or series of continuous acquisitions while moving the Z-focal plane of a camera through the sample, to produce one or more two-dimensional images of fluorescence intensity integrated over the Z-dimension. The acquisition method is much faster than a Z-stack method, which enables higher throughput and reduces the risk of exposing the sample to too much fluorescent light. The long-exposure Z-sweep image(s) is then input into a neural network that has been trained to produce a high-quality (in-focus) two-dimensional projection image of the sample.
Type: Application
Filed: July 22, 2020
Publication date: January 27, 2022
Inventors: Timothy JACKSON, Rickard SJÖGREN, Christoffer EDLUND, Edvin FORSGREN
-
Publication number: 20210350113
Abstract: A computer-implemented method for analysis of cell images comprises obtaining a deep neural network and a training dataset, the deep neural network comprising a plurality of hidden layers; obtaining first sets of intermediate output values that are output from at least one of the plurality of hidden layers; constructing a latent variable model using the first sets of intermediate output values, the latent variable model mapping the first sets of intermediate output values to first sets of projected values in a sub-space that has a lower dimension than the intermediate output values; obtaining a second set of intermediate output values by inputting a received new cell image to the deep neural network; mapping, using the latent variable model, the second set of intermediate output values to a second set of projected values; and determining whether the received new cell image is an outlier.
Type: Application
Filed: September 5, 2019
Publication date: November 11, 2021
Applicant: SARTORIUS STEDIM DATA ANALYTICS AB
Inventors: Rickard Sjögren, Johan Trygg
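The latent variable model over hidden-layer activations can be sketched with PCA: fit the model on training-set activations, then flag a new observation as an outlier if its distance to the latent sub-space (its residual) exceeds what was seen during training. This is a simplified illustration, assuming a PCA latent model and a training-quantile threshold, not the patent's full procedure:

```python
import numpy as np

def fit_latent_model(activations, n_components=2):
    """Fit a PCA latent variable model on training activations.

    activations: array (n_samples, n_features) of hidden-layer outputs.
    Returns the mean, the principal components, and the residual sum of
    squares of each training sample (its distance to the sub-space).
    """
    mean = activations.mean(axis=0)
    centered = activations - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    projected = centered @ components.T @ components
    rss = ((centered - projected) ** 2).sum(axis=1)
    return mean, components, rss

def is_outlier(x, mean, components, train_rss, quantile=0.99):
    """Flag x if its residual exceeds the chosen training quantile."""
    centered = x - mean
    projected = centered @ components.T @ components
    rss = ((centered - projected) ** 2).sum()
    return bool(rss > np.quantile(train_rss, quantile))
```

Because the test is on activations rather than raw pixels, it catches images the network has never seen anything like, even when the network still emits a confident class label.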
-
Publication number: 20210334656
Abstract: An example method comprises receiving a new observation characterizing at least one parameter of an entity; inputting the new observation to a deep neural network having hidden layers; obtaining a second set of intermediate output values that are output from at least one of the hidden layers by inputting the received new observation to the deep neural network; mapping the second set of intermediate output values to a second set of projected values; determining whether or not the received new observation is an outlier with respect to the training dataset based on the latent variable model and the second set of projected values; calculating a prediction for the new observation; and determining a result indicative of the occurrence of at least one anomaly in the entity based on the prediction and the determination of whether or not the new observation is an outlier.
Type: Application
Filed: September 5, 2019
Publication date: October 28, 2021
Applicant: SARTORIUS STEDIM DATA ANALYTICS AB
Inventors: Rickard Sjögren, Johan Trygg
-
Publication number: 20210089750
Abstract: A method of analyzing images of a biological specimen using a computational model is described, the method including processing a cell image of the biological specimen and a phase contrast image of the biological specimen using the computational model to generate output data. The cell image is a composite of a first brightfield image of the biological specimen at a first focal plane and a second brightfield image of the biological specimen at a second focal plane. The method also includes performing a comparison of the output data and reference data, and refining the computational model based on that comparison. The method also includes thereafter processing additional image pairs according to the computational model to further refine it based on comparisons of additional output data generated by the model to additional reference data.
Type: Application
Filed: November 17, 2020
Publication date: March 25, 2021
Inventors: Timothy Jackson, Nevine Holtz, Christoffer Edlund, Rickard Sjögren
-
Publication number: 20200074269
Abstract: A computer-implemented method for data analysis is provided.
Type: Application
Filed: September 5, 2018
Publication date: March 5, 2020
Inventors: Johan Trygg, Rickard Sjoegren